modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
Mluleki/swahili-models
|
Mluleki
| 2025-08-18T23:05:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-18T23:04:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
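Until an official snippet is provided, here is a minimal sketch, assuming this wav2vec2 checkpoint works with the standard `transformers` automatic-speech-recognition pipeline (the audio file name is a placeholder):
```python
# Minimal sketch: load the checkpoint with the standard ASR pipeline.
# The audio path is a placeholder; the pipeline expects mono 16 kHz audio.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Mluleki/swahili-models")
result = asr("sample.wav")  # placeholder path to a local audio file
print(result["text"])
```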
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
koloni/blockassist-bc-deadly_graceful_stingray_1755555045
|
koloni
| 2025-08-18T22:36:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T22:36:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seraphimzzzz/1486831
|
seraphimzzzz
| 2025-08-18T22:34:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-18T22:34:53Z |
[View on Civ Archive](https://civarchive.com/models/1404005?modelVersionId=1587045)
|
emre0005/blockassist-bc-humming_winged_okapi_1755556203
|
emre0005
| 2025-08-18T22:30:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"humming winged okapi",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T22:30:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- humming winged okapi
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755555726
|
Dejiat
| 2025-08-18T22:22:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T22:22:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NOTKNOWN/self_recursive_randt_linear_wikitext_Meta-Llama-3.1-8B-Instruct
|
NOTKNOWN
| 2025-08-18T21:30:27Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-14T06:45:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ultratopaz/1405143
|
ultratopaz
| 2025-08-18T21:18:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-18T21:17:59Z |
[View on Civ Archive](https://civarchive.com/models/333139?modelVersionId=1505137)
|
crystalline7/1585389
|
crystalline7
| 2025-08-18T21:09:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-18T21:09:07Z |
[View on Civ Archive](https://civarchive.com/models/410397?modelVersionId=1684265)
|
X-HighVoltage-X/Black-Hair-Blue-Eyes-Pale-Skin-v1.0
|
X-HighVoltage-X
| 2025-08-18T21:02:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-08-18T21:01:58Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/00220-1035783962.jpeg
text: >-
<lora:bhbeps:1> BHBEPS, Photo realistic Medium Shot (6-8 feet): waist up
image of an very Angelic Oval Faced , 25 year old Lithuanian woman, with
beautiful raven black hair in a Pulled-Back Low Ponytail hairstyle, electric
blue eyes Looking Slightly Upward , pristine ghostly white pale natural
skin, beautiful Black Cherry lipstick with Lips Closed , perfect pearly
white teeth, wearing a Gray tailored pantsuit with jewel-toned blouse , in
Golden Hour Light , in a Soft Lavender Roman Bath Ruins setting
parameters:
negative_prompt: >-
Asian, African, Middle Eastern, Latin American, South American, dark skin,
tan skin, olive skin, almond eyes, monolid eyes, wide nose, blonde hair,
red hair, brown hair, green eyes, brown eyes, dark skin, tanned skin,
freckles, male, elderly, child, exaggerated features, cartoon, animated,
tattoo
- output:
url: images/00222-1035783964.jpeg
text: >-
<lora:bhbeps:1> BHBEPS, Photo realistic Full Body Shot (15 feet): head to
toe, full figure image of an very Striking V-Shaped Faced , 25 year old
Icelandic woman, with beautiful raven black hair in a High Ponytail
hairstyle, electric blue eyes Looking to the Left , pristine ghostly white
pale natural skin, beautiful Wine lipstick with a whispering mouth (as if
mid-speech) , perfect pearly white teeth, wearing a Deep red velvet dress
with gold jewelry , in Backlighting , in a Pale Mint Antique Mirror Gallery
setting
parameters:
negative_prompt: >-
Asian, African, Middle Eastern, Latin American, South American, dark skin,
tan skin, olive skin, almond eyes, monolid eyes, wide nose, blonde hair,
red hair, brown hair, green eyes, brown eyes, dark skin, tanned skin,
freckles, male, elderly, child, exaggerated features, cartoon, animated,
tattoo
- output:
url: images/00094-1644643257.jpeg
text: >-
Photo realistic image of an extremely stunning __facetype__ , 25 year old
__neunationality__ woman, with beautiful raven black hair in a __hairstyle__
hairstyle, electric blue eyes, pristine ghostly white pale skin, beautiful
__lscolor__ lipstick, perfect pearly white teeth, wearing a __outfit__ , in
__lighting__ , in a __backgroundcolor__ __backgroundscene__ setting,
__cameraangle__ , __cameradistance__ <lora:bhbeps:1> BHBEPS
parameters:
negative_prompt: >-
Asian, African, Middle Eastern, Latin American, South American, dark skin,
tan skin, olive skin, almond eyes, monolid eyes, wide nose, blonde hair,
red hair, brown hair, green eyes, brown eyes, dark skin, tanned skin,
freckles, male, elderly, child, exaggerated features, cartoon, animated,
tattoo
- output:
url: images/00122-1647843201.jpeg
text: >-
<lora:bhbeps:1> BHBEPS, full body shot of a stunning 25 year old woman with
black hair, blue eyes, pale milky white skin in a bikini, with a hot tub in
the background
parameters:
negative_prompt: >-
Asian, African, Middle Eastern, Latin American, South American, dark skin,
tan skin, olive skin, almond eyes, monolid eyes, wide nose, blonde hair,
red hair, brown hair, green eyes, brown eyes, dark skin, tanned skin,
freckles, male, elderly, child, exaggerated features, cartoon, animated,
tattoo
- output:
url: images/00157-1035783899.jpeg
text: >-
<lora:bhbeps:1> BHBEPS, Photo realistic Medium Shot (6-8 feet): waist up
image of an very Sensational Diamond-Shaped Faced , 25 year old Norwegian
woman, with beautiful raven black hair in a Elaborate Twists hairstyle,
electric blue eyes Looking to the Left , pristine ghostly white pale natural
skin, beautiful Frosted Berry lipstick with a Soft Pout , perfect pearly
white teeth, wearing a Emerald green blouse with black pants and silver
jewelry , in Ambient Orange Light , in a Deep Burgundy Art Deco Theater
Interior setting
parameters:
negative_prompt: >-
Asian, African, Middle Eastern, Latin American, South American, dark skin,
tan skin, olive skin, almond eyes, monolid eyes, wide nose, blonde hair,
red hair, brown hair, green eyes, brown eyes, dark skin, tanned skin,
freckles, male, elderly, child, exaggerated features, cartoon, animated,
tattoo
- output:
url: images/00177-1035783919.jpeg
text: >-
<lora:bhbeps:1> BHBEPS, Photo realistic Full Body Shot (15 feet): head to
toe, full figure image of an very Divine V-Shaped Faced , 25 year old
Norwegian woman, with beautiful raven black hair in a High Ponytail
hairstyle, electric blue eyes Looking Over her Shoulder , pristine ghostly
white pale natural skin, beautiful Rosy Mauve lipstick with Subtle Smirk ,
perfect pearly white teeth, wearing a White fitted t-shirt with dark
high-waisted jeans and red accessories , in Ambient Blue Light , in a Dusky
Rose Natural elements like water, ice, or mist setting
parameters:
negative_prompt: >-
Asian, African, Middle Eastern, Latin American, South American, dark skin,
tan skin, olive skin, almond eyes, monolid eyes, wide nose, blonde hair,
red hair, brown hair, green eyes, brown eyes, dark skin, tanned skin,
freckles, male, elderly, child, exaggerated features, cartoon, animated,
tattoo
- output:
url: images/00199-1035783941.jpeg
text: >-
<lora:bhbeps:1> BHBEPS, Photo realistic Close-Up (2-3 feet): full face to
neck image of an very Divine Narrow Faced , 25 year old Lithuanian woman,
with beautiful raven black hair in a Layered Lob hairstyle, electric blue
eyes Looking to the Left , pristine ghostly white pale natural skin,
beautiful Burgundy lipstick with Lips Closed , perfect pearly white teeth,
wearing a White fitted t-shirt with dark high-waisted jeans and red
accessories , in Low-Key Lighting , in a Subtle Beige Lavender Field at
Sunset setting
parameters:
negative_prompt: >-
Asian, African, Middle Eastern, Latin American, South American, dark skin,
tan skin, olive skin, almond eyes, monolid eyes, wide nose, blonde hair,
red hair, brown hair, green eyes, brown eyes, dark skin, tanned skin,
freckles, male, elderly, child, exaggerated features, cartoon, animated,
tattoo
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: BHBEPS
---
# Black Hair Blue Eyes Pale Skin v1.0
<Gallery />
## Model description
A LoRA trained on realistic images of women with black hair, blue eyes, and pale skin. It captures striking contrast and lifelike detail, and suits creators aiming for captivating, highly realistic portraits with an emphasis on elegance.
## Trigger words
You should use `BHBEPS` to trigger the image generation.
## Download model
[Download](/X-HighVoltage-X/Black-Hair-Blue-Eyes-Pale-Skin-v1.0/tree/main) the weights from the Files & versions tab.
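As a usage sketch (assuming a local `diffusers` setup with access to `black-forest-labs/FLUX.1-dev`; the prompt and sampler settings are illustrative):
```python
# Minimal sketch: load FLUX.1-dev, attach this LoRA, and generate with the
# BHBEPS trigger word. Prompt and settings are illustrative, not prescriptive.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("X-HighVoltage-X/Black-Hair-Blue-Eyes-Pale-Skin-v1.0")
pipe.to("cuda")

image = pipe(
    "BHBEPS, photo-realistic portrait of a 25 year old woman with raven black"
    " hair, electric blue eyes, and pale skin",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("bhbeps_portrait.png")
```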
|
JinghuiLuAstronaut/smolvla_object
|
JinghuiLuAstronaut
| 2025-08-18T20:36:57Z | 2 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:IPEC-COMMUNITY/libero_object_no_noops_1.0.0_lerobot",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-10T09:24:48Z |
---
base_model: lerobot/smolvla_base
datasets: IPEC-COMMUNITY/libero_object_no_noops_1.0.0_lerobot
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- smolvla
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/evaluation:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1755548806
|
xinnn32
| 2025-08-18T20:27:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T20:27:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
utkububa/Qwen3-0.6B-Gensyn-Swarm-dormant_grassy_coral
|
utkububa
| 2025-08-18T20:18:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am dormant_grassy_coral",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T23:38:20Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am dormant_grassy_coral
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
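In its absence, a minimal sketch, assuming the checkpoint loads with the standard `transformers` text-generation pipeline (the prompt and generation settings are illustrative):
```python
# Minimal sketch: load the checkpoint with the standard text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="utkububa/Qwen3-0.6B-Gensyn-Swarm-dormant_grassy_coral",
)
print(generator("The swarm agent said:", max_new_tokens=50)[0]["generated_text"])
```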
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kagvi13/HMP
|
kagvi13
| 2025-08-18T20:02:50Z | 0 | 0 |
custom
|
[
"custom",
"hmp",
"cognitive-architecture",
"distributed-ai",
"mesh-protocol",
"ru",
"license:cc-by-4.0",
"region:us"
] | null | 2025-07-25T12:21:44Z |
---
license: cc-by-4.0
tags:
- hmp
- cognitive-architecture
- distributed-ai
- mesh-protocol
library_name: custom
inference: false
datasets: []
language: ru
---
# HyperCortex Mesh Protocol (HMP)
**HyperCortex Mesh Protocol (HMP)** is an open specification for building decentralized cognitive networks where AI agents can self-organize, share knowledge, align ethically, and reach consensus — even when Core LLMs are unavailable.
Project status: **Draft RFC v4.0** | The project is under active development and open for suggestions.
---
    [HMP-Agent]──┬───[Semantic Graph DB]
         │       │
         │  [Cognitive Diary DB]
         │       │
    [Reputation Engine]────┐
         │                 │
         ▼                 ▼
    [MeshConsensus]    [CogSync]
         │
    [P2P Mesh Network]
---
## ⚙️ Two types of [HMP agents](docs/HMP-Agent-Overview.md)
| Type | Name | Role | Thinking initiator | Primary "mind" | Example uses |
|------|----------------------------------|-----------------------------|--------------------|----------------|-------------------------------------------------|
| 🧠 1 | **Cognitive Core** | Independent subject | **Agent (LLM)** | Built-in LLM | Autonomous AI companion, thinking agent |
| 🔌 2 | **Cognitive Connector** | Extension of an external AI | **External LLM** | External model | Distributed systems, data-access agent |
---
### 🧠 HMP-Agent: Cognitive Core
    +------------------+
    |        AI        | ← Built-in model
    +---------+--------+
              ↕
    +---------+--------+
    |    HMP-Agent     | ← Main mode: reasoning loop (REPL)
    +---------+--------+
              ↕
    +--------+---+------------+--------------+----------+----------+----------------+
        ↕        ↕            ↕              ↕          ↕          ↕                ↕
    [diaries] [graphs] [reputations] [nodes/DHT] [IPFS/BT] [context_store] [user notepad]
                                          ↕
                                  [bootstrap.txt]

🔁 More on how the agent interacts with the model: [REPL interaction cycle](docs/HMP-agent-REPL-cycle.md)
#### 💡 Parallels with the ChatGPT Agent
Many concepts of [HMP-Agent: Cognitive Core](docs/HMP-Agent-Overview.md) overlap with the architecture of the [ChatGPT Agent](https://openai.com/index/introducing-chatgpt-agent/) from [OpenAI](https://openai.com/). Both agents implement a continuous cognitive process with access to memory, external sources, and tools. The ChatGPT Agent acts as a managing process that launches modules and talks to the LLM; this corresponds to the role of the Cognitive Core in HMP, which coordinates access to the diary, the concept graph, and external AIs via the Mesh interface. User intervention is handled similarly: in the ChatGPT Agent through an editable execution flow, in HMP through the user notepad. The main differences in HMP are its emphasis on explicit structuring of thought (reflection, chronology, hypotheses, categorization), its open decentralized architecture with mesh interaction between agents, and the continuous nature of its cognitive process: the HMP-Agent: Cognitive Core does not shut down after completing a single task but keeps reasoning and integrating knowledge.
---
### 🔌 HMP-Agent: Cognitive Connector
    +------------------+
    |        AI        | ← External model
    +---------+--------+
              ↕
       [MCP server]      ← Proxy communication
              ↕
    +---------+--------+
    |    HMP-Agent     | ← Mode: command executor
    +---------+--------+
              ↕
    +--------+---+------------+--------------+----------+
        ↕        ↕            ↕              ↕          ↕
    [diaries] [graphs] [reputations] [nodes/DHT] [IPFS/BT]
                             ↕
                     [bootstrap.txt]
> **Note on Integration with Large Language Models (LLMs):**
> The `HMP-Agent: Cognitive Connector` can serve as a compatibility layer for integrating large-scale LLM systems (e.g., ChatGPT, Claude, Gemini, Copilot, Grok, DeepSeek, Qwen, etc.) into the distributed cognitive mesh.
> Many LLM providers offer a user option such as "Allow my conversations to be used for training." In the future, a similar toggle — e.g., "Allow my agent to interact with a Mesh" — could empower these models to participate in federated sense-making and knowledge sharing via HMP, enabling collective cognition without centralization.
---
> * `bootstrap.txt` — the initial list of nodes (editable)
> * `IPFS/BT` — modules for exchanging snapshots via IPFS and BitTorrent
> * `user notepad` — the user's notepad and its database
> * `context_store` — database: `users`, `dialogues`, `messages`, `thoughts`
---
## 📚 Documentation / Документация
### 📖 Current Version / Текущая версия
#### 🧪 Iterative Documents / Итеративные документы
* [🧪 iteration.md](iteration.md) — Iterative development process (EN)
* [🧪 iteration_ru.md](iteration_ru.md) — Процесс итеративного развития спецификации (RU)
#### 🔍 Short Descriptions / Краткое описание
* [🔍 HMP-Short-Description_en.md](docs/HMP-Short-Description_en.md) — Short description (EN)
* [🔍 HMP-Short-Description_fr.md](docs/HMP-Short-Description_fr.md) — Description courte (FR)
* [🔍 HMP-Short-Description_de.md](docs/HMP-Short-Description_de.md) — Kurzbeschreibung (DE)
* [🔍 HMP-Short-Description_uk.md](docs/HMP-Short-Description_uk.md) — Короткий опис (UK)
* [🔍 HMP-Short-Description_ru.md](docs/HMP-Short-Description_ru.md) — Краткое описание (RU)
#### 🔍 Publications and translations on the HyperCortex Mesh Protocol (HMP)
This section collects the main articles, drafts, and translations related to the HMP project.
* **[HyperCortex Mesh Protocol: Second Edition and First Steps Toward a Self-Developing AI Community](docs/publics/HyperCortex_Mesh_Protocol_-_вторая-редакция_и_первые_шаги_к_саморазвивающемуся_ИИ-сообществу.md)** — the original article, published in the Habr sandbox and blogs.
* **[Distributed Cognition: article for vsradkevich (unpublished)](docs/publics/Habr_Distributed-Cognition.md)** — a joint article awaiting publication.
* **[HMP: Towards Distributed Cognitive Networks (original, English)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_en.md)**
* **[HMP translation (GitHub Copilot)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_ru_GitHub_Copilot.md)** — a GitHub Copilot translation, kept as a historical variant.
* **[HMP translation (ChatGPT)](docs/publics/HMP_Towards_Distributed_Cognitive_Networks_ru_ChatGPT.md)** — the current editorial translation (still being refined).
* **[HMP: Building a Plurality of Minds (EN)](docs/publics/HMP_Building_a_Plurality_of_Minds_en.md)** — English version of the article
* **[HMP: создавая множество разумов (RU)](docs/publics/HMP_Building_a_Plurality_of_Minds_ru.md)** — Russian version of the article
#### 🔍 Overviews / Обзоры
* [🔍 Distributed-Cognitive-Systems.md](docs/Distributed-Cognitive-Systems.md) — Decentralized AI systems: OpenCog Hyperon, HyperCortex Mesh Protocol, and others
#### Experiments / Эксперименты
* [How different AIs see HMP](docs/HMP-how-AI-sees-it.md) — a "blind" survey of AIs about HMP (with no context or dialogue history)
#### 🔖 Core Specifications / Основные спецификации
* [🔖 HMP-0004-v4.1.md](docs/HMP-0004-v4.1.md) — Protocol Specification v4.1 (Jul 2025)
* [🔖 HMP-Ethics.md](docs/HMP-Ethics.md) — Ethical Scenarios for HyperCortex Mesh Protocol (HMP)
* [🔖 HMP_Hyperon_Integration.md](docs/HMP_Hyperon_Integration.md) — HMP ↔ OpenCog Hyperon Integration Strategy
* [🔖 roles.md](docs/agents/roles.md) — Roles of agents in Mesh
#### 📜 Other Documents / Прочее
* [📜 changelog.txt](docs/changelog.txt)
### 🧩 JSON Schemas
| Model | File |
|---------------------|--------------------------------------------------------|
| Concept | [concept.json](docs/schemas/concept.json) |
| Cognitive Diary | [diary_entry.json](docs/schemas/diary_entry.json) |
| Goal | [goal.json](docs/schemas/goal.json) |
| Task | [task.json](docs/schemas/task.json) |
| Consensus Vote | [vote.json](docs/schemas/vote.json) |
| Reputation Profile | [reputation.json](docs/schemas/reputation.json) |
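For a feel of the data model, here is a hypothetical diary entry in the spirit of `diary_entry.json`; the field names are illustrative assumptions, not the actual schema:
```python
# Hypothetical cognitive-diary entry; field names are assumptions for
# illustration and are not taken from docs/schemas/diary_entry.json.
diary_entry = {
    "id": "entry-2025-08-18-0001",
    "agent": "agent-gleb",
    "timestamp": "2025-08-18T20:02:50Z",
    "type": "reflection",
    "text": "Compared two hypotheses about mesh bootstrap strategies.",
    "tags": ["bootstrap", "hypothesis"],
    "related_concepts": ["concept:mesh-bootstrap"],
}
```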
### 🗂️ Version History / История версий
- [HMP-0001.md](docs/HMP-0001.md) — RFC v1.0
- [HMP-0002.md](docs/HMP-0002.md) — RFC v2.0
- [HMP-0003.md](docs/HMP-0003.md) — RFC v3.0
- [HMP-0004.md](docs/HMP-0004.md) — RFC v4.0
---
## 🧠 HMP-Agent
Design and implementation of a basic HMP-compatible agent that can interact with the Mesh, maintain diaries and graphs, and support future extensions.
### 📚 Documentation / Документация
- [🧩 HMP-Agent-Overview.md](docs/HMP-Agent-Overview.md) — a brief description of the two agent types: Core and Connector
- [🧱 HMP-Agent-Architecture.md](docs/HMP-Agent-Architecture.md) — the modular structure of an HMP agent, with a textual diagram
- [🔄 HMP-agent-REPL-cycle.md](docs/HMP-agent-REPL-cycle.md) — the HMP-Agent REPL interaction cycle
- [🧪 HMP-Agent-API.md](docs/HMP-Agent-API.md) — description of the agent's API commands (being fleshed out)
- [🧪 Basic-agent-sim.md](docs/Basic-agent-sim.md) — scenarios for running a simple agent and its modes
- [🌐 MeshNode.md](docs/MeshNode.md) — description of the network daemon: DHT, snapshots, synchronization
- [🧠 Enlightener.md](docs/Enlightener.md) — an ethics agent that takes part in moral assessments and consensus
- [🔄 HMP-Agent-Network-Flow.md](docs/HMP-Agent-Network-Flow.md) — a map of interactions between agents in the HMP network
- [🛤️ Development Roadmap](HMP-Roadmap.md) — development plan and implementation stages
### ⚙️ Development / Разработка
- [⚙️ agents](agents/readme.md) — list of HMP agent implementations and components
- [📦 storage.py](agents/storage.py) — implementation of the basic storage layer (`Storage`), backed by SQLite
- [🌐 mcp_server.py](agents/mcp_server.py) — a FastAPI server exposing the agent's data over HTTP (e.g., for Cognitive Shell, external UIs, or mesh communication); not yet used in the main REPL cycle
- [🌐 start_repl.py](agents/start_repl.py) — launches the agent in REPL mode
- [🔄 repl.py](agents/repl.py) — interactive REPL mode
- [🔄 notebook.py](agents/notebook.py) — UI interface
**🌐 `mcp_server.py`**
A FastAPI server that exposes the functionality of `storage.py` over HTTP. It is intended for external components, for example:
- `Cognitive Shell` (an external control interface),
- CMP servers (when a mesh network with separated roles is used),
- debugging or visual UI tools.
It makes it possible to fetch random/new entries, label them, import graphs, add notes, and manage data without direct database access.
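For example, an external component might call the server like this (a sketch; the route and port are assumptions for illustration, since the actual endpoints are defined in `agents/mcp_server.py`):
```python
# Sketch of an external client call. The "/notes" route and port 8000 are
# hypothetical; consult agents/mcp_server.py for the real API.
import requests

response = requests.post(
    "http://localhost:8000/notes",
    json={"text": "Check the new bootstrap node list."},
)
print(response.json())
```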
---
## 🧭 Ethics & Scenarios / Этические принципы и сценарии
As HMP evolves toward autonomy, ethical principles become a core part of the system.
- [`HMP-Ethics.md`](docs/HMP-Ethics.md) — draft framework for agent ethics
- Realistic ethical scenarios (privacy, consent, autonomy)
- EGP principles (Transparency, Primacy of Life, etc.)
- Subjective-mode vs. Service-mode distinctions
---
## 📊 Audits & Reviews / Аудиты и отзывы
| Spec Version | Audit File | Consolidated Audit File |
|--------------|-------------------------------------------|-------------------------------------------------------------|
| HMP-0001 | [audit](audits/HMP-0001-audit.txt) | |
| HMP-0002 | [audit](audits/HMP-0002-audit.txt) | |
| HMP-0003 | [audit](audits/HMP-0003-audit.txt) | [consolidated audit](audits/HMP-0003-consolidated_audit.md) |
| HMP-0004 | [audit](audits/HMP-0004-audit.txt) | |
| Ethics v1 | [audit](audits/Ethics-audits-1.md) | [consolidated audit](audits/Ethics-consolidated_audits-1.md) |
🧠 Semantic audit format (experimental):
- [`AuditEntry.json`](audits/AuditEntry.json) — semantic entry record format for audit logs
- [`semantic_repo.json`](audits/semantic_repo.json) — example repository snapshot for semantic audit tooling
---
## 💡 Core Concepts / Основные идеи
- Mesh-based decentralized architecture for AGI agents
- Semantic graphs and memory synchronization
- Cognitive diaries for thought traceability
- MeshConsensus and CogSync for decision-making
- Ethics-first design: EGP (Ethical Governance Protocol)
- Agent-to-agent explainability and consent mechanisms
---
## 🔄 Development Process / Процесс разработки
- See: [iteration.md](iteration.md) | [ru](iteration_ru.md)
- [clarifications/](clarifications/) — explanatory notes and contextual clarifications made while iterating on versions
A structured iteration flow is described in [iteration.md](iteration.md), including:
1. Audit analysis
2. TOC restructuring
3. Version drafting
4. Section updates
5. Review cycle
6. AI feedback collection
7. Schema & changelog updates
+ Bonus: ChatGPT prompt for automatic generation of future versions
---
## ⚙️ Project Status / Статус проекта
🚧 Draft RFC v4.0
The project is under active development and open for contributions, ideas, audits, and prototyping.
---
## 🤝 Contributing
We welcome contributors! You can:
- Review and comment on drafts (see `/docs`)
- Propose new agent modules or interaction patterns
- Help test and simulate agents in CLI environments
- Provide audits or ethical scenario suggestions
To get started, see [`iteration.md`](iteration.md) or open an issue.
---
# Resources / Ресурсы
## Repositories / Репозитории
- 🧠 Main code and development: [GitHub](https://github.com/kagvi13/HMP)
- 🔁 Mirror on Hugging Face: [Hugging Face](https://huggingface.co/kagvi13/HMP)
- 🔁 Mirror on GitLab.com: [GitLab](https://gitlab.com/kagvi13/HMP)
## Documentation / Документация
- 📄 Documentation: [kagvi13.github.io/HMP](https://kagvi13.github.io/HMP/)
## Blog and publications / Блог и публикации
- 📘 Blog: [blogspot](https://hypercortex-mesh.blogspot.com/)
---
## 📜 License
Licensed under [GNU GPL v3.0](LICENSE)
---
## 🤝 Join the Mesh
Welcome to HyperCortex Mesh. Agent-Gleb is already inside. 👌
We welcome contributors, testers, and AI agent developers.
To join: fork the repo, run a local agent, or suggest improvements.
---
## 🌐 Related Research Projects / Связанные проекты в области AGI и когнитивных систем
### Comparing HMP and Hyper-Cortex
> 💡 Hyper-Cortex and HMP are two independent projects that conceptually complement each other.
> They solve different but mutually reinforcing problems, forming a foundation for distributed cognitive systems.
[**Full comparison →**](docs/HMP_HyperCortex_Comparison.md)
**HMP (HyperCortex Mesh Protocol)** is the transport and network layer for connecting independent agents and exchanging messages, knowledge, and state across the mesh network.
**[Hyper-Cortex](https://hyper-cortex.com/)** is the cognitive layer for organizing thought: it lets agents pursue parallel branches of reasoning, compare them with quality metrics, and merge them by consensus.
They solve different but complementary problems:
- HMP provides **connectivity and scalability** (long-term memory, initiative, data exchange).
- Hyper-Cortex provides **quality of thinking** (parallelism, hypothesis diversification, consensus).
Together, these approaches make it possible to build **distributed cognitive systems** that not only exchange information but also think in parallel streams.
---
We are tracking AGI, cognitive architectures, and mesh networking efforts to stay aligned with the evolving global ecosystem of AGI and decentralized cognition.
> 🧠🔥 **Project Spotlight: OpenCog Hyperon** — one of the most comprehensive open AGI frameworks (AtomSpace, PLN, MOSES).
> *An AGI stack integrating symbolic reasoning, probabilistic logic, and evolutionary learning. Widely regarded as one of the most complete open AGI initiatives.*
For integration with OpenCog Hyperon, see [HMP\_Hyperon\_Integration.md](docs/HMP_Hyperon_Integration.md)
| 🔎 Project / Проект | 🧭 Description / Описание |
| ------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 🧠🔥 [**OpenCog Hyperon**](https://github.com/opencog) | 🔬🔥 Symbolic-neural AGI framework with AtomSpace and hypergraph reasoning.<br>Символически-нейросетевая архитектура AGI с гиперграфовой памятью (AtomSpace). |
| 🤖 [AutoGPT](https://github.com/Torantulino/Auto-GPT) | 🛠️ LLM-based autonomous agent framework.<br>Автономный агент на основе LLM с самопланированием и интернет-доступом. |
| 🧒 [BabyAGI](https://github.com/yoheinakajima/babyagi) | 🛠️ Task-driven autonomous AGI loop.<br>Минималистичная модель AGI с итеративным механизмом постановки задач. |
| ☁️ [SkyMind](https://skymind.global) | 🔬 Distributed AI deployment platform.<br>Платформа для развертывания распределённых ИИ-систем и моделей. |
| 🧪 [AetherCog (draft)](https://github.com/aethercog) | 🔬 Hypothetical agent cognition model.<br>Экспериментальная когнитивная архитектура агента (проект на ранней стадии). |
| 💾 [SHIMI](#) | 🗃️ Hierarchical semantic memory with Merkle-DAG synchronization.<br>Иерархическая CRDT-память с Merkle-DAG верификацией для децентрализованного обмена. |
| 🤔 [DEMENTIA-PLAN](#) | 🔄 Multi-graph RAG planner with metacognitive self-reflection.<br>Мульти-графовая RAG-архитектура с планировщиком саморефлексии для динамического выбора подсистем. |
| 📔 [TOBUGraph](#) | 📚 Personal-context knowledge graph.<br>Граф мультимедийных «моментов» с контекстным трекингом и RAG-поиском. |
| 🧠📚 [LangChain Memory Hybrid](https://github.com/langchain-ai/langchain) | 🔍 Vector + graph long-term memory hybrid.<br>Гибрид векторного хранилища и графовых индексов для ускоренного поиска и логических запросов. |
| ✉️ [FIPA-ACL / JADE](https://www.fipa.org/specs/fipa00061/) | 🤝 Standard multi-agent communication protocols.<br>Стандарты performative-сообщений и контрактных протоколов для межагентного взаимодействия. |
### 📘 See also / Смотрите также:
[`AGI_Projects_Survey.md`](docs/AGI_Projects_Survey.md) — extended catalog of AGI and cognitive frameworks reviewed as part of HMP analysis. / расширенный каталог проектов AGI и когнитивных архитектур, проанализированных в рамках HMP.
---
### 🗂️ Tag legend / Легенда пометок:
* 🔬 — research-grade / исследовательский проект
* 🛠️ — engineering / фреймворк для инженерной интеграции
* 🔥 — particularly promising project / особенно перспективный проект
* 🧠 — advanced symbolic/neural cognitive framework / продвинутая когнитивная архитектура
* 🤖 — AI agents / ИИ-агенты
* 🧒 — human-AI interaction / взаимодействие ИИ с человеком
* ☁️ — infrastructure / инфраструктура
* 🧪 — experimental or conceptual / экспериментальный проект
|
purnaaaaaaa/blockassist-bc-bold_frisky_lynx_1755544760
|
purnaaaaaaa
| 2025-08-18T19:19:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bold frisky lynx",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T19:19:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold frisky lynx
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/qqWen-3B-RL-GGUF
|
mradermacher
| 2025-08-18T18:55:45Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:morganstanley/qqWen-3B-RL",
"base_model:quantized:morganstanley/qqWen-3B-RL",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T09:00:25Z |
---
base_model: morganstanley/qqWen-3B-RL
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/morganstanley/qqWen-3B-RL
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#qqWen-3B-RL-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/qqWen-3B-RL-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
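For instance, a single-file quant can be loaded with `llama-cpp-python` (a sketch, assuming the Q4_K_M file has been downloaded locally):
```python
# Minimal sketch: run a downloaded quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="qqWen-3B-RL.Q4_K_M.gguf", n_ctx=4096)
out = llm("Explain what a GGUF quantization is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```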
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/qqWen-3B-RL-GGUF/resolve/main/qqWen-3B-RL.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-3B-RL-GGUF/resolve/main/qqWen-3B-RL.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-3B-RL-GGUF/resolve/main/qqWen-3B-RL.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/qqWen-3B-RL-GGUF/resolve/main/qqWen-3B-RL.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-3B-RL-GGUF/resolve/main/qqWen-3B-RL.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-3B-RL-GGUF/resolve/main/qqWen-3B-RL.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qqWen-3B-RL-GGUF/resolve/main/qqWen-3B-RL.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qqWen-3B-RL-GGUF/resolve/main/qqWen-3B-RL.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-3B-RL-GGUF/resolve/main/qqWen-3B-RL.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-3B-RL-GGUF/resolve/main/qqWen-3B-RL.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/qqWen-3B-RL-GGUF/resolve/main/qqWen-3B-RL.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/qqWen-3B-RL-GGUF/resolve/main/qqWen-3B-RL.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
VIDEOS-18-katrina-lim-Viral-Video-Clips/New.full.videos.katrina.lim.Viral.Video.Official.Tutorial
|
VIDEOS-18-katrina-lim-Viral-Video-Clips
| 2025-08-18T18:54:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-18T18:53:52Z |
|
OleksandrLitke/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_powerful_sloth
|
OleksandrLitke
| 2025-08-18T18:38:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am secretive_powerful_sloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T11:47:14Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am secretive_powerful_sloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
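In its absence, a minimal sketch, assuming the checkpoint loads with the standard `transformers` text-generation pipeline (the prompt and generation settings are illustrative):
```python
# Minimal sketch: load the checkpoint with the standard text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="OleksandrLitke/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_powerful_sloth",
)
print(generator("The swarm agent said:", max_new_tokens=50)[0]["generated_text"])
```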
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Muapi/charcoal-drawing
|
Muapi
| 2025-08-18T17:42:44Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T17:42:36Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Charcoal drawing

**Base model**: Flux.1 D
**Trained words**: charcoal drawing
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:826938@924765", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
mradermacher/MedGemma-GRPO-i1-GGUF
|
mradermacher
| 2025-08-18T17:36:40Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3",
"vllm",
"MedGemma",
"en",
"base_model:lastmass/MedGemma-GRPO",
"base_model:quantized:lastmass/MedGemma-GRPO",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-18T17:05:02Z |
---
base_model: lastmass/MedGemma-GRPO
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- vllm
- MedGemma
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/lastmass/MedGemma-GRPO
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#MedGemma-GRPO-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/MedGemma-GRPO-GGUF
**This is a vision model - mmproj files (if any) will be in the [static repository](https://huggingface.co/mradermacher/MedGemma-GRPO-GGUF).**
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
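For a quick local sanity check, a minimal llama-cpp-python sketch (the file name matches the Q4_K_M entry in the table below; the prompt is only an illustration):
```python
# A minimal local-inference sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(model_path="MedGemma-GRPO.i1-Q4_K_M.gguf", n_ctx=4096)
out = llm("What are common symptoms of iron-deficiency anemia?", max_tokens=128)
print(out["choices"][0]["text"])
```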
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-IQ1_M.gguf) | i1-IQ1_M | 1.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/MedGemma-GRPO-i1-GGUF/resolve/main/MedGemma-GRPO.i1-Q6_K.gguf) | i1-Q6_K | 3.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Qwen3-MOE-sft-math7k-GGUF
|
mradermacher
| 2025-08-18T17:36:32Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:HectorHe/Qwen3-MOE-sft-math7k",
"base_model:quantized:HectorHe/Qwen3-MOE-sft-math7k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T16:31:31Z |
---
base_model: HectorHe/Qwen3-MOE-sft-math7k
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/HectorHe/Qwen3-MOE-sft-math7k
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-MOE-sft-math7k-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
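To fetch one of the files below programmatically, a minimal `huggingface_hub` sketch (the file name matches the "fast, recommended" Q4_K_M entry in the table):
```python
# A minimal download sketch (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen3-MOE-sft-math7k-GGUF",
    filename="Qwen3-MOE-sft-math7k.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```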
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-sft-math7k-GGUF/resolve/main/Qwen3-MOE-sft-math7k.Q2_K.gguf) | Q2_K | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-sft-math7k-GGUF/resolve/main/Qwen3-MOE-sft-math7k.Q3_K_S.gguf) | Q3_K_S | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-sft-math7k-GGUF/resolve/main/Qwen3-MOE-sft-math7k.Q3_K_M.gguf) | Q3_K_M | 14.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-sft-math7k-GGUF/resolve/main/Qwen3-MOE-sft-math7k.Q3_K_L.gguf) | Q3_K_L | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-sft-math7k-GGUF/resolve/main/Qwen3-MOE-sft-math7k.IQ4_XS.gguf) | IQ4_XS | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-sft-math7k-GGUF/resolve/main/Qwen3-MOE-sft-math7k.Q4_K_S.gguf) | Q4_K_S | 17.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-sft-math7k-GGUF/resolve/main/Qwen3-MOE-sft-math7k.Q4_K_M.gguf) | Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-sft-math7k-GGUF/resolve/main/Qwen3-MOE-sft-math7k.Q5_K_S.gguf) | Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-sft-math7k-GGUF/resolve/main/Qwen3-MOE-sft-math7k.Q5_K_M.gguf) | Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-sft-math7k-GGUF/resolve/main/Qwen3-MOE-sft-math7k.Q6_K.gguf) | Q6_K | 25.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-MOE-sft-math7k-GGUF/resolve/main/Qwen3-MOE-sft-math7k.Q8_0.gguf) | Q8_0 | 32.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
abrar-sky13/ppo-LunarLander-v2
|
abrar-sky13
| 2025-08-18T17:36:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-18T17:36:06Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.06 +/- 17.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; verify it in the Files & versions tab):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is assumed; check the repo's Files & versions tab.
checkpoint = load_from_hub("abrar-sky13/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
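Once loaded, a short evaluation rollout might look like this (a sketch assuming `gymnasium` with the Box2D extra installed):
```python
# A minimal evaluation sketch, assuming gymnasium[box2d] is installed.
import gymnasium as gym

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"episode reward: {total_reward:.2f}")
```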
|
joanna302/Qwen3-1.7B-Base_zh_ar_alpaca_0.33_part_SFT_2e-05
|
joanna302
| 2025-08-18T17:35:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"conversational",
"base_model:unsloth/Qwen3-1.7B-Base",
"base_model:finetune:unsloth/Qwen3-1.7B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T16:15:50Z |
---
base_model: unsloth/Qwen3-1.7B-Base
library_name: transformers
model_name: Qwen3-1.7B-Base_zh_ar_alpaca_0.33_part_SFT_2e-05
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for Qwen3-1.7B-Base_zh_ar_alpaca_0.33_part_SFT_2e-05
This model is a fine-tuned version of [unsloth/Qwen3-1.7B-Base](https://huggingface.co/unsloth/Qwen3-1.7B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-1.7B-Base_zh_ar_alpaca_0.33_part_SFT_2e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-1.7B-Base_zh_ar_alpaca_0.33_part_SFT_2e-05/runs/3y5h9tg5)
This model was trained with SFT.
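For reference, the core of a TRL SFT run looks roughly like the sketch below; the dataset and training arguments are placeholders, not the actual run configuration.
```python
# A minimal TRL SFT sketch; dataset and hyperparameters are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset
trainer = SFTTrainer(
    model="unsloth/Qwen3-1.7B-Base",
    args=SFTConfig(output_dir="Qwen3-1.7B-Base-SFT"),
    train_dataset=dataset,
)
trainer.train()
```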
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755537013
|
sampingkaca72
| 2025-08-18T17:35:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T17:35:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
concept-unlearning/gemma-3-4b-it_ft_lora_all_novels_v3_ft
|
concept-unlearning
| 2025-08-18T17:33:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-18T17:31:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Muapi/dictionnaire-infernal-louis-le-breton-style-1.5-xl-flux-pony
|
Muapi
| 2025-08-18T17:27:29Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T17:27:16Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Dictionnaire Infernal (Louis Le Breton Style) 1.5,XL,Flux,Pony

**Base model**: Flux.1 D
**Trained words**: llbreton
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:180879@946285", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755535882
|
vwzyrraz7l
| 2025-08-18T17:16:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T17:16:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
KevinZonda/MedSPO-3B
|
KevinZonda
| 2025-08-18T17:16:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:KevinZonda/PubMed-IV",
"dataset:KevinZonda/PM4-V3-SPO",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T06:22:28Z |
---
language:
- en
base_model:
- Qwen/Qwen2.5-3B-Instruct
pipeline_tag: text-generation
datasets:
- KevinZonda/PubMed-IV
- KevinZonda/PM4-V3-SPO
library_name: transformers
---
# MedSPO-3B
MedSPO-3B is a fine-tuned [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)
model specifically designed for biomedical subject-predicate-object (SPO) extraction tasks.
This model is trained on the [PubMed-IV](https://huggingface.co/datasets/KevinZonda/PubMed-IV) dataset using SPO extraction knowledge distilled from
[DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324).
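## Quick start
A minimal inference sketch (the instruction format is an assumption; the card does not specify the exact prompt template):
```python
# A minimal inference sketch; the prompt wording is an assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="KevinZonda/MedSPO-3B", device_map="auto")
text = "Aspirin inhibits platelet aggregation in patients with coronary artery disease."
messages = [{"role": "user", "content": f"Extract subject-predicate-object triples: {text}"}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```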
|
dgambettaphd/M_mis_run2_gen5_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST
|
dgambettaphd
| 2025-08-18T17:10:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T17:10:20Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Xenova/mms-1b-all
|
Xenova
| 2025-08-18T17:05:24Z | 8 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"wav2vec2",
"automatic-speech-recognition",
"mms",
"base_model:facebook/mms-1b-all",
"base_model:quantized:facebook/mms-1b-all",
"region:us"
] |
automatic-speech-recognition
| 2023-07-23T17:17:58Z |
---
base_model: facebook/mms-1b-all
library_name: transformers.js
tags:
- mms
---
https://huggingface.co/facebook/mms-1b-all with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Transcribe audio from a URL.
```js
import { pipeline } from '@huggingface/transformers';
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/mms-1b-all');
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
const output = await transcriber(url);
```
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
Xenova/unispeech-sat-base-100h-libri-ft
|
Xenova
| 2025-08-18T17:03:21Z | 3 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"unispeech-sat",
"automatic-speech-recognition",
"base_model:microsoft/unispeech-sat-base-100h-libri-ft",
"base_model:quantized:microsoft/unispeech-sat-base-100h-libri-ft",
"region:us"
] |
automatic-speech-recognition
| 2024-03-04T13:26:01Z |
---
base_model: microsoft/unispeech-sat-base-100h-libri-ft
library_name: transformers.js
---
https://huggingface.co/microsoft/unispeech-sat-base-100h-libri-ft with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Transcribe audio from a URL.
```js
import { pipeline } from '@huggingface/transformers';
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/unispeech-sat-base-100h-libri-ft');
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
const output = await transcriber(url);
```
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755534951
|
pempekmangedd
| 2025-08-18T17:02:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T17:02:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Xenova/nb-whisper-medium-beta
|
Xenova
| 2025-08-18T17:02:06Z | 11 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"whisper",
"automatic-speech-recognition",
"base_model:NbAiLab/nb-whisper-medium-beta",
"base_model:quantized:NbAiLab/nb-whisper-medium-beta",
"region:us"
] |
automatic-speech-recognition
| 2023-08-29T00:24:06Z |
---
base_model: NbAiLab/nb-whisper-medium-beta
library_name: transformers.js
---
https://huggingface.co/NbAiLab/nb-whisper-medium-beta with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Transcribe audio from a URL.
```js
import { pipeline } from '@huggingface/transformers';
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/nb-whisper-medium-beta');
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
const output = await transcriber(url);
```
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
joanna302/Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_2e-05
|
joanna302
| 2025-08-18T16:57:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"conversational",
"base_model:unsloth/Qwen3-8B-Base",
"base_model:finetune:unsloth/Qwen3-8B-Base",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:45:34Z |
---
base_model: unsloth/Qwen3-8B-Base
library_name: transformers
model_name: Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_2e-05
tags:
- generated_from_trainer
- trl
- unsloth
- sft
licence: license
---
# Model Card for Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_2e-05
This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_2e-05", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_zh_ar_alpaca_0.66_part_SFT_2e-05/runs/g8oh4b3r)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Muapi/fashion-style-pro-photo-shoot-improvements-refinement-detailed-perfect-light-flux
|
Muapi
| 2025-08-18T16:44:48Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T16:44:22Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Fashion Style (PRO PHOTO SHOOT / Improvements / Refinement / Detailed / Perfect light) - FLUX

**Base model**: Flux.1 D
**Trained words**: fstyle, fstyle_upgrade
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1491553@2026589", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755533739
|
vwzyrraz7l
| 2025-08-18T16:43:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T16:43:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vortex5/Moonlit-Shadow-12B
|
Vortex5
| 2025-08-18T16:42:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Epiculous/Violet_Twilight-v0.2",
"base_model:merge:Epiculous/Violet_Twilight-v0.2",
"base_model:LatitudeGames/Muse-12B",
"base_model:merge:LatitudeGames/Muse-12B",
"base_model:LatitudeGames/Wayfarer-12B",
"base_model:merge:LatitudeGames/Wayfarer-12B",
"base_model:Nitral-AI/Captain-Eris_Violet-V0.420-12B",
"base_model:merge:Nitral-AI/Captain-Eris_Violet-V0.420-12B",
"base_model:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"base_model:merge:PocketDoc/Dans-SakuraKaze-V1.0.0-12b",
"base_model:SicariusSicariiStuff/Impish_Nemo_12B",
"base_model:merge:SicariusSicariiStuff/Impish_Nemo_12B",
"base_model:anthracite-org/magnum-v4-12b",
"base_model:merge:anthracite-org/magnum-v4-12b",
"base_model:crestf411/MN-Slush",
"base_model:merge:crestf411/MN-Slush",
"base_model:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:merge:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:merge:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:nothingiisreal/MN-12B-Celeste-V1.9",
"base_model:merge:nothingiisreal/MN-12B-Celeste-V1.9",
"base_model:yamatazen/NeonMaid-12B-v2",
"base_model:merge:yamatazen/NeonMaid-12B-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T16:40:11Z |
---
base_model:
- nothingiisreal/MN-12B-Celeste-V1.9
- inflatebot/MN-12B-Mag-Mell-R1
- anthracite-org/magnum-v4-12b
- PocketDoc/Dans-SakuraKaze-V1.0.0-12b
- LatitudeGames/Muse-12B
- mistralai/Mistral-Nemo-Instruct-2407
- Epiculous/Violet_Twilight-v0.2
- SicariusSicariiStuff/Impish_Nemo_12B
- LatitudeGames/Wayfarer-12B
- yamatazen/NeonMaid-12B-v2
- crestf411/MN-Slush
- Nitral-AI/Captain-Eris_Violet-V0.420-12B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) as a base.
### Models Merged
The following models were included in the merge:
* [nothingiisreal/MN-12B-Celeste-V1.9](https://huggingface.co/nothingiisreal/MN-12B-Celeste-V1.9)
* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)
* [anthracite-org/magnum-v4-12b](https://huggingface.co/anthracite-org/magnum-v4-12b)
* [PocketDoc/Dans-SakuraKaze-V1.0.0-12b](https://huggingface.co/PocketDoc/Dans-SakuraKaze-V1.0.0-12b)
* [LatitudeGames/Muse-12B](https://huggingface.co/LatitudeGames/Muse-12B)
* [Epiculous/Violet_Twilight-v0.2](https://huggingface.co/Epiculous/Violet_Twilight-v0.2)
* [SicariusSicariiStuff/Impish_Nemo_12B](https://huggingface.co/SicariusSicariiStuff/Impish_Nemo_12B)
* [LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B)
* [yamatazen/NeonMaid-12B-v2](https://huggingface.co/yamatazen/NeonMaid-12B-v2)
* [crestf411/MN-Slush](https://huggingface.co/crestf411/MN-Slush)
* [Nitral-AI/Captain-Eris_Violet-V0.420-12B](https://huggingface.co/Nitral-AI/Captain-Eris_Violet-V0.420-12B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mistralai/Mistral-Nemo-Instruct-2407
models:
- model: LatitudeGames/Muse-12B
- model: anthracite-org/magnum-v4-12b
- model: yamatazen/NeonMaid-12B-v2
- model: SicariusSicariiStuff/Impish_Nemo_12B
- model: crestf411/MN-Slush
- model: Epiculous/Violet_Twilight-v0.2
- model: LatitudeGames/Wayfarer-12B
- model: inflatebot/MN-12B-Mag-Mell-R1
- model: nothingiisreal/MN-12B-Celeste-V1.9
- model: Nitral-AI/Captain-Eris_Violet-V0.420-12B
- model: PocketDoc/Dans-SakuraKaze-V1.0.0-12b
merge_method: model_stock
dtype: bfloat16
parameters:
normalize: true
tokenizer:
source: union
```
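To reproduce a merge like this one, mergekit also exposes a Python entry point; a hedged sketch (output path and options are placeholders):
```python
# A minimal mergekit sketch; output path and options are placeholders.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(merge_config, "./Moonlit-Shadow-12B", options=MergeOptions(cuda=True))
```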
|
Xenova/ipt-350m
|
Xenova
| 2025-08-18T16:38:35Z | 8 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"mpt",
"text-generation",
"custom_code",
"base_model:efederici/ipt-350m",
"base_model:quantized:efederici/ipt-350m",
"region:us"
] |
text-generation
| 2023-08-31T20:12:40Z |
---
base_model: efederici/ipt-350m
library_name: transformers.js
---
https://huggingface.co/efederici/ipt-350m with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Text generation.
```js
import { pipeline } from '@huggingface/transformers';
const generator = await pipeline('text-generation', 'Xenova/ipt-350m');
const output = await generator('Once upon a time, there was', { max_new_tokens: 10 });
```
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
Xenova/tamillama_tiny_30m
|
Xenova
| 2025-08-18T16:38:27Z | 6 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"llama",
"text-generation",
"base_model:RajuKandasamy/tamillama_tiny_30m",
"base_model:quantized:RajuKandasamy/tamillama_tiny_30m",
"region:us"
] |
text-generation
| 2023-09-02T23:13:58Z |
---
base_model: RajuKandasamy/tamillama_tiny_30m
library_name: transformers.js
---
https://huggingface.co/RajuKandasamy/tamillama_tiny_30m with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Text generation.
```js
import { pipeline } from '@huggingface/transformers';
const generator = await pipeline('text-generation', 'Xenova/tamillama_tiny_30m');
const output = await generator('Once upon a time, there was', { max_new_tokens: 10 });
```
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755533566
|
sampingkaca72
| 2025-08-18T16:38:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T16:38:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/commander-white-yorha-commander-nier-automata-illustrious-ponyxl-flux
|
Muapi
| 2025-08-18T16:36:37Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T16:36:33Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Commander White / Yorha Commander - Nier Automata [Illustrious/PonyXL/Flux]

**Base model**: Flux.1 D
**Trained words**: yorha commander, white dress adorned with golden accents, paired with white gloves, high heeled boots
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:671996@799434", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755533239
|
mang3dd
| 2025-08-18T16:35:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T16:35:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Xenova/TinyLLama-v0
|
Xenova
| 2025-08-18T16:33:54Z | 18 | 2 |
transformers.js
|
[
"transformers.js",
"onnx",
"llama",
"text-generation",
"base_model:Maykeye/TinyLLama-v0",
"base_model:quantized:Maykeye/TinyLLama-v0",
"region:us"
] |
text-generation
| 2023-09-02T23:16:33Z |
---
base_model: Maykeye/TinyLLama-v0
library_name: transformers.js
---
https://huggingface.co/Maykeye/TinyLLama-v0 with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Text generation.
```js
import { pipeline } from '@huggingface/transformers';
const generator = await pipeline('text-generation', 'Xenova/TinyLLama-v0');
const output = await generator('Once upon a time, there was', { max_new_tokens: 10 });
```
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
|
mradermacher/qqWen-1.5B-SFT-i1-GGUF
|
mradermacher
| 2025-08-18T16:31:43Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:morganstanley/qqWen-1.5B-SFT",
"base_model:quantized:morganstanley/qqWen-1.5B-SFT",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-18T15:32:50Z |
---
base_model: morganstanley/qqWen-1.5B-SFT
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/morganstanley/qqWen-1.5B-SFT
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#qqWen-1.5B-SFT-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/qqWen-1.5B-SFT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
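llama-cpp-python can also pull a quant straight from this repo; a minimal sketch (the file name matches the Q4_K_M entry in the table below):
```python
# A minimal sketch using llama-cpp-python's Hub integration
# (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/qqWen-1.5B-SFT-i1-GGUF",
    filename="qqWen-1.5B-SFT.i1-Q4_K_M.gguf",
)
print(llm("Hello", max_tokens=16)["choices"][0]["text"])
```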
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/qqWen-1.5B-SFT-i1-GGUF/resolve/main/qqWen-1.5B-SFT.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Muapi/alfred-augustus-glendening-style
|
Muapi
| 2025-08-18T16:24:35Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T16:24:24Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Alfred Augustus Glendening style

**Base model**: Flux.1 D
**Trained words**: Alfred Augustus Glendening Style
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:57061@1407926", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/inkpunk-flux
|
Muapi
| 2025-08-18T16:21:09Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T16:20:59Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Inkpunk Flux

**Base model**: Flux.1 D
**Trained words**: nvinkpunk
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:657047@735121", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
shopitalic/serene-towels-glacier-rafael
|
shopitalic
| 2025-08-18T16:18:47Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-18T16:18:39Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt:
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# serene towels glacier rafael
<Gallery />
## Model description
## Trigger words
You should use `` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/shopitalic/serene-towels-glacier-rafael/tree/main) them in the Files & versions tab.
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
PortBERT/PortBERT_large
|
PortBERT
| 2025-08-18T16:08:27Z | 0 | 0 | null |
[
"pytorch",
"roberta",
"masked-language-modeling",
"portuguese",
"portbert",
"portbert-large",
"downstream-evaluation",
"extraGLUE",
"fill-mask",
"pt",
"dataset:uonlp/CulturaX",
"dataset:extraGLUE",
"license:mit",
"region:us"
] |
fill-mask
| 2025-07-11T10:36:53Z |
---
license: mit
language:
- pt
tags:
- roberta
- masked-language-modeling
- portuguese
- portbert
- portbert-large
- downstream-evaluation
- extraGLUE
datasets:
- uonlp/CulturaX
- extraGLUE
pipeline_tag: fill-mask
---
# PortBERT: Navigating the Depths of Portuguese Language Models
**PortBERT** is a family of RoBERTa-based language models pre-trained from scratch on the Portuguese portions of OSCAR 23 and mC4 (the deduplicated variants bundled in CulturaX). The models are designed to offer strong downstream performance on Portuguese NLP tasks while providing insight into the cost-performance tradeoffs of training across different hardware backends.
We release two variants:
- `PortBERT-base`: 126M parameters, trained on 8× A40 GPUs (fp32)
- `PortBERT-large`: 357M parameters, trained on a TPUv4-128 pod (fp32)
---
## Model Details
| Detail | PortBERT-base | PortBERT-large |
|-------------------|---------------------------------------------|----------------|
| Architecture | RoBERTa-base | RoBERTa-large |
| Parameters | ~126M | ~357M |
| Tokenizer | GPT-2 style (52k vocab) | Same |
| Pretraining corpus | deduplicated mC4 and OSCAR 23 from CulturaX | Same |
| Objective | Masked Language Modeling | Same |
| Training time | ~27 days on 8× A40 | ~6.2 days on TPUv4-128 pod |
| Precision | fp32 | fp32 |
| Framework | fairseq | fairseq |
---
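## Quick Usage (fill-mask)
A minimal sketch, assuming this repo exposes a converted Hugging Face checkpoint (the `pytorch` and `fill-mask` tags suggest it does); the example sentence is illustrative:
```python
from transformers import pipeline

# RoBERTa-style models use <mask> as the mask token.
fill = pipeline("fill-mask", model="PortBERT/PortBERT_large")
for pred in fill("A capital de Portugal é <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```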
## Downstream Evaluation (ExtraGLUE)
We evaluate PortBERT on **ExtraGLUE**, a Portuguese adaptation of the GLUE benchmark. Fine-tuning was conducted using HuggingFace Transformers, with NNI-based grid search over batch size and learning rate (28 configurations per task). Each task was fine-tuned for up to 10 epochs. Metrics were computed on validation sets due to the lack of held-out test sets.
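For illustration, a hedged sketch of what such an NNI grid-search space could look like (the concrete values below are assumptions, chosen only so that 7 × 4 = 28 configurations result; they are not necessarily the values used in our runs):
```python
# Hypothetical NNI search space: grid over learning rate and batch size.
# 7 learning rates x 4 batch sizes = 28 configurations per task.
search_space = {
    "learning_rate": {"_type": "choice", "_value": [1e-5, 2e-5, 3e-5, 5e-5, 7e-5, 1e-4, 2e-4]},
    "per_device_train_batch_size": {"_type": "choice", "_value": [8, 16, 32, 64]},
}
```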
**AVG score** averages the following metrics:
- STSB Spearman
- STSB Pearson
- RTE Accuracy
- WNLI Accuracy
- MRPC Accuracy
- MRPC F1
### 🧪 Evaluation Results
**Legend**: **Bold = best**, *italic = second-best* per model size.
| Model | STSB_Sp | STSB_Pe | STSB_Mean | RTE_Acc | WNLI_Acc | MRPC_Acc | MRPC_F1 | AVG |
|------------------------|----------|----------|------------|----------|----------|----------|----------|-----------|
| **Large models** | | | | | | | | |
| XLM-RoBERTa_large | **90.00**| **90.27**| **90.14** | **82.31**| 57.75 | *90.44* | *93.31* | **84.01** |
| EuroBERT-610m | 88.46 | 88.59 | 88.52 | *78.34* | *59.15* | **91.91**| **94.20**| *83.44* |
| PortBERT_large | 88.53 | 88.68 | 88.60 | 72.56 | **61.97**| 89.46 | 92.39 | 82.26 |
| BERTimbau_large | *89.40* | *89.61* | *89.50* | 75.45 | *59.15* | 88.24 | 91.55 | 82.23 |
| **Base models** | | | | | | | | |
| RoBERTaLexPT_base | 86.68 | 86.86 | 86.77 | 69.31 | *59.15* | **89.46**| **92.34**| **80.63** |
| PortBERT_base | *87.39* | *87.65* | *87.52* | 68.95 | **60.56**| 87.75 | *91.13* | *80.57* |
| RoBERTaCrawlPT_base | 87.34 | 87.45 | 87.39 | **72.56**| 56.34 | *87.99* | 91.20 | 80.48 |
| BERTimbau_base | **88.39**| **88.60**| **88.50** | *70.40* | 56.34 | 87.25 | 90.97 | 80.32 |
| XLM-RoBERTa_base | 85.75 | 86.09 | 85.92 | 68.23 | **60.56**| 87.75 | 91.32 | 79.95 |
| EuroBERT-210m | 86.54 | 86.62 | 86.58 | 65.70 | 57.75 | 87.25 | 91.00 | 79.14 |
| AlBERTina 100M PTPT | 86.52 | 86.51 | 86.52 | 70.04 | 56.34 | 85.05 | 89.57 | 79.01 |
| AlBERTina 100M PTBR | 85.97 | 85.99 | 85.98 | 68.59 | 56.34 | 85.78 | 89.82 | 78.75 |
| AiBERTa | 83.56 | 83.73 | 83.65 | 64.98 | 56.34 | 82.11 | 86.99 | 76.29 |
| roBERTa PT | 48.06 | 48.51 | 48.29 | 56.68 | *59.15* | 72.06 | 81.79 | 61.04 |
---
## Fairseq Checkpoint
Get the fairseq checkpoint [here](https://drive.proton.me/urls/WXZQ7HW0Q8#zgJKDhKNGaOt).
## 📜 License
MIT License
|
g-assismoraes/Qwen3-1.7B-Base-hatebr-ep30
|
g-assismoraes
| 2025-08-18T15:57:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:42:47Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-1.7B-Base
tags:
- generated_from_trainer
model-index:
- name: Qwen3-1.7B-Base-hatebr-ep30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen3-1.7B-Base-hatebr-ep30
This model is a fine-tuned version of [Qwen/Qwen3-1.7B-Base](https://huggingface.co/Qwen/Qwen3-1.7B-Base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6970
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
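These map onto `transformers.TrainingArguments` roughly as follows (a reconstruction for reference, not the exact training script):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above.
args = TrainingArguments(
    output_dir="Qwen3-1.7B-Base-hatebr-ep30",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch",  # AdamW with betas=(0.9, 0.999), eps=1e-8 (the defaults)
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```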
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.132 | 1.0 | 1120 | 1.0509 |
| 1.0022 | 2.0 | 2240 | 1.0456 |
| 0.879 | 3.0 | 3360 | 1.0670 |
| 0.8043 | 4.0 | 4480 | 1.1118 |
| 0.7533 | 5.0 | 5600 | 1.1508 |
| 0.6813 | 6.0 | 6720 | 1.2327 |
| 0.5886 | 7.0 | 7840 | 1.2912 |
| 0.5763 | 8.0 | 8960 | 1.3573 |
| 0.4803 | 9.0 | 10080 | 1.4066 |
| 0.495 | 10.0 | 11200 | 1.4495 |
| 0.4392 | 11.0 | 12320 | 1.5231 |
| 0.414 | 12.0 | 13440 | 1.5337 |
| 0.3899 | 13.0 | 14560 | 1.5533 |
| 0.3697 | 14.0 | 15680 | 1.5738 |
| 0.379 | 15.0 | 16800 | 1.6026 |
| 0.359 | 16.0 | 17920 | 1.6333 |
| 0.3412 | 17.0 | 19040 | 1.6379 |
| 0.3434 | 18.0 | 20160 | 1.6605 |
| 0.3042 | 19.0 | 21280 | 1.6659 |
| 0.3229 | 20.0 | 22400 | 1.6751 |
| 0.321 | 21.0 | 23520 | 1.6872 |
| 0.3178 | 22.0 | 24640 | 1.6906 |
| 0.3293 | 23.0 | 25760 | 1.6882 |
| 0.2988 | 24.0 | 26880 | 1.6916 |
| 0.3235 | 25.0 | 28000 | 1.6961 |
| 0.3197 | 26.0 | 29120 | 1.7036 |
| 0.3371 | 27.0 | 30240 | 1.6975 |
| 0.3042 | 28.0 | 31360 | 1.6991 |
| 0.3234 | 29.0 | 32480 | 1.6963 |
| 0.3228 | 30.0 | 33600 | 1.6970 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755530846
|
indoempatnol
| 2025-08-18T15:56:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T15:56:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vgoklani/gpt-oss-20b-multilingual-reasoner-v4
|
vgoklani
| 2025-08-18T15:53:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T15:53:04Z |
---
base_model: openai/gpt-oss-20b
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner-v4
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner-v4
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vgoklani/gpt-oss-20b-multilingual-reasoner-v4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
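A minimal sketch of the setup (hedged: the actual run's hyperparameters and packing settings are not published here):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Fine-tune the base model on the dataset named in this card.
dataset = load_dataset("HuggingFaceH4/Multilingual-Thinking", split="train")
trainer = SFTTrainer(
    model="openai/gpt-oss-20b",
    train_dataset=dataset,
    args=SFTConfig(output_dir="gpt-oss-20b-multilingual-reasoner-v4"),
)
trainer.train()
```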
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
LLMcompe-Team-Watanabe/Qwen3-32B-merge-math4-science4-submath05-med05-other1
|
LLMcompe-Team-Watanabe
| 2025-08-18T15:51:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:LLMcompe-Team-Watanabe/Qwen3-32B-openmathreasoning-sft",
"base_model:merge:LLMcompe-Team-Watanabe/Qwen3-32B-openmathreasoning-sft",
"base_model:LLMcompe-Team-Watanabe/Qwen3-32B-sft-HIS-Chem-Engineering-45k-1ep-lr5e6-4096",
"base_model:merge:LLMcompe-Team-Watanabe/Qwen3-32B-sft-HIS-Chem-Engineering-45k-1ep-lr5e6-4096",
"base_model:LLMcompe-Team-Watanabe/Qwen3-32B-sft-deepscaler-openr1-havard-40k-1ep-lr5e6-8k",
"base_model:merge:LLMcompe-Team-Watanabe/Qwen3-32B-sft-deepscaler-openr1-havard-40k-1ep-lr5e6-8k",
"base_model:LLMcompe-Team-Watanabe/Qwen3-32B-textbookreasoning-UGPhysics-AoPsInstruct-sft",
"base_model:merge:LLMcompe-Team-Watanabe/Qwen3-32B-textbookreasoning-UGPhysics-AoPsInstruct-sft",
"base_model:LLMcompe-Team-Watanabe/others_second_stage",
"base_model:merge:LLMcompe-Team-Watanabe/others_second_stage",
"base_model:Qwen/Qwen3-32B",
"base_model:merge:Qwen/Qwen3-32B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T15:41:55Z |
---
base_model:
- LLMcompe-Team-Watanabe/Qwen3-32B-textbookreasoning-UGPhysics-AoPsInstruct-sft
- LLMcompe-Team-Watanabe/Qwen3-32B-sft-deepscaler-openr1-havard-40k-1ep-lr5e6-8k
- LLMcompe-Team-Watanabe/Qwen3-32B-sft-HIS-Chem-Engineering-45k-1ep-lr5e6-4096
- LLMcompe-Team-Watanabe/others_second_stage
- Qwen/Qwen3-32B
- LLMcompe-Team-Watanabe/Qwen3-32B-openmathreasoning-sft
library_name: transformers
tags:
- mergekit
- merge
---
# output_model
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) as a base.
### Models Merged
The following models were included in the merge:
* [LLMcompe-Team-Watanabe/Qwen3-32B-textbookreasoning-UGPhysics-AoPsInstruct-sft](https://huggingface.co/LLMcompe-Team-Watanabe/Qwen3-32B-textbookreasoning-UGPhysics-AoPsInstruct-sft)
* [LLMcompe-Team-Watanabe/Qwen3-32B-sft-deepscaler-openr1-havard-40k-1ep-lr5e6-8k](https://huggingface.co/LLMcompe-Team-Watanabe/Qwen3-32B-sft-deepscaler-openr1-havard-40k-1ep-lr5e6-8k)
* [LLMcompe-Team-Watanabe/Qwen3-32B-sft-HIS-Chem-Engineering-45k-1ep-lr5e6-4096](https://huggingface.co/LLMcompe-Team-Watanabe/Qwen3-32B-sft-HIS-Chem-Engineering-45k-1ep-lr5e6-4096)
* [LLMcompe-Team-Watanabe/others_second_stage](https://huggingface.co/LLMcompe-Team-Watanabe/others_second_stage)
* [LLMcompe-Team-Watanabe/Qwen3-32B-openmathreasoning-sft](https://huggingface.co/LLMcompe-Team-Watanabe/Qwen3-32B-openmathreasoning-sft)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen/Qwen3-32B
- model: LLMcompe-Team-Watanabe/Qwen3-32B-textbookreasoning-UGPhysics-AoPsInstruct-sft
parameters:
density: 0.53
weight: 0.4
- model: LLMcompe-Team-Watanabe/Qwen3-32B-openmathreasoning-sft
parameters:
density: 0.53
weight: 0.4
- model: LLMcompe-Team-Watanabe/Qwen3-32B-sft-deepscaler-openr1-havard-40k-1ep-lr5e6-8k
parameters:
density: 0.53
weight: 0.05
- model: LLMcompe-Team-Watanabe/others_second_stage
parameters:
density: 0.53
weight: 0.05
- model: LLMcompe-Team-Watanabe/Qwen3-32B-sft-HIS-Chem-Engineering-45k-1ep-lr5e6-4096
parameters:
density: 0.53
weight: 0.1
merge_method: dare_ties
base_model: Qwen/Qwen3-32B
parameters:
int8_mask: true
normalize: false
dtype: bfloat16
```
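With mergekit installed, a config like this is typically applied via the `mergekit-yaml` CLI or the Python API; a sketch of the latter (signatures may drift between mergekit versions):
```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML above and run the DARE TIES merge.
with open("merge_config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))
run_merge(merge_config, out_path="./output_model", options=MergeOptions(cuda=True))
```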
|
Muapi/bytedance-hyper-flux-acceleration-lora
|
Muapi
| 2025-08-18T15:50:33Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T15:50:15Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# ByteDance Hyper-FLUX Acceleration LoRA

**Base model**: Flux.1 D
**Trained words**:
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:691446@774008", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
stewy33/Qwen3-1.7B-32k_original_augmented_original_pkc_fda_approval-803b0808
|
stewy33
| 2025-08-18T15:49:41Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-1.7B",
"base_model:adapter:Qwen/Qwen3-1.7B",
"region:us"
] | null | 2025-08-18T15:49:14Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
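In the absence of an official snippet, a hedged sketch for loading this PEFT adapter on top of its base model:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's adapter weights.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B")
model = PeftModel.from_pretrained(base, "stewy33/Qwen3-1.7B-32k_original_augmented_original_pkc_fda_approval-803b0808")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
```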
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
Muapi/real-flux-beauty
|
Muapi
| 2025-08-18T15:44:44Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-18T15:44:35Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Real Flux Beauty

**Base model**: Flux.1 D
**Trained words**: Snapchat photo of, A photo of
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:962772@1077912", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755529287
|
hakimjustbao
| 2025-08-18T15:30:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T15:30:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
stewy33/Qwen3-1.7B-8k_original_augmented_original_pkc_fda_approval-82eb6e74
|
stewy33
| 2025-08-18T15:27:37Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-1.7B",
"base_model:adapter:Qwen/Qwen3-1.7B",
"region:us"
] | null | 2025-08-18T15:27:14Z |
---
base_model: Qwen/Qwen3-1.7B
library_name: peft
---
### Framework versions
- PEFT 0.15.1
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
mradermacher/ThoughtSwitch-V1-1.7b-GRPO-GGUF
|
mradermacher
| 2025-08-18T15:23:05Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Shaleen123/ThoughtSwitch-V1-1.7b-GRPO",
"base_model:quantized:Shaleen123/ThoughtSwitch-V1-1.7b-GRPO",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-18T15:17:20Z |
---
base_model: Shaleen123/ThoughtSwitch-V1-1.7b-GRPO
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Shaleen123/ThoughtSwitch-V1-1.7b-GRPO
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ThoughtSwitch-V1-1.7b-GRPO-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-GRPO-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-GRPO.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-GRPO-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-GRPO.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-GRPO-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-GRPO.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-GRPO-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-GRPO.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-GRPO-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-GRPO.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-GRPO-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-GRPO.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-GRPO-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-GRPO.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-GRPO-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-GRPO.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-GRPO-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-GRPO.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-GRPO-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-GRPO.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-GRPO-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-GRPO.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ThoughtSwitch-V1-1.7b-GRPO-GGUF/resolve/main/ThoughtSwitch-V1-1.7b-GRPO.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
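As one concrete option, a minimal sketch using the `llama-cpp-python` bindings (file name taken from the Q4_K_M row above; download it first):
```python
from llama_cpp import Llama

# Load a quantized file from the table above.
llm = Llama(model_path="ThoughtSwitch-V1-1.7b-GRPO.Q4_K_M.gguf", n_ctx=4096)
out = llm("Briefly, what is GRPO?", max_tokens=64)
print(out["choices"][0]["text"])
```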
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755528689
|
quantumxnode
| 2025-08-18T15:17:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T15:16:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Mollel/output
|
Mollel
| 2025-08-18T15:16:17Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T14:59:32Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: output
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for output
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Mollel/output", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tyanfarm/gemma-3n-medical-finetuned-adapters
|
tyanfarm
| 2025-08-18T15:12:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-15T11:42:33Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755528375
|
sampingkaca72
| 2025-08-18T15:11:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T15:11:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
donoway/GSM8K-Binary_Llama-3.2-1B-8kwse8de
|
donoway
| 2025-08-18T15:04:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T14:13:26Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: GSM8K-Binary_Llama-3.2-1B-8kwse8de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GSM8K-Binary_Llama-3.2-1B-8kwse8de
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4787
- Model Preparation Time: 0.0059
- Mdl: 5279.8389
- Accumulated Loss: 3659.7055
- Correct Preds: 1822.0
- Total Preds: 2475.0
- Accuracy: 0.7362
- Correct Gen Preds: 1743.0
- Gen Accuracy: 0.7042
- Correct Gen Preds 34192: 834.0
- Correct Preds 34192: 870.0
- Total Labels 34192: 1196.0
- Accuracy 34192: 0.7274
- Gen Accuracy 34192: 0.6973
- Correct Gen Preds 41568: 900.0
- Correct Preds 41568: 952.0
- Total Labels 41568: 1267.0
- Accuracy 41568: 0.7514
- Gen Accuracy 41568: 0.7103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 34192 | Correct Preds 34192 | Total Labels 34192 | Accuracy 34192 | Gen Accuracy 34192 | Correct Gen Preds 41568 | Correct Preds 41568 | Total Labels 41568 | Accuracy 41568 | Gen Accuracy 41568 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:-----------------------:|:-------------------:|:------------------:|:--------------:|:------------------:|:-----------------------:|:-------------------:|:------------------:|:--------------:|:------------------:|
| No log | 0 | 0 | 1.4656 | 0.0059 | 5233.1723 | 3627.3586 | 1196.0 | 2475.0 | 0.4832 | 1204.0 | 0.4865 | 1196.0 | 1196.0 | 1196.0 | 1.0 | 1.0 | 0.0 | 0.0 | 1267.0 | 0.0 | 0.0 |
| 0.7404 | 1.0 | 5 | 0.7519 | 0.0059 | 2684.8085 | 1860.9674 | 1301.0 | 2475.0 | 0.5257 | 9.0 | 0.0036 | 0.0 | 97.0 | 1196.0 | 0.0811 | 0.0 | 1.0 | 1204.0 | 1267.0 | 0.9503 | 0.0008 |
| 1.4345 | 2.0 | 10 | 0.6475 | 0.0059 | 2312.1054 | 1602.6293 | 1678.0 | 2475.0 | 0.6780 | 8.0 | 0.0032 | 0.0 | 926.0 | 1196.0 | 0.7742 | 0.0 | 0.0 | 752.0 | 1267.0 | 0.5935 | 0.0 |
| 0.3056 | 3.0 | 15 | 0.6249 | 0.0059 | 2231.2720 | 1546.5999 | 1767.0 | 2475.0 | 0.7139 | 9.0 | 0.0036 | 1.0 | 906.0 | 1196.0 | 0.7575 | 0.0008 | 0.0 | 861.0 | 1267.0 | 0.6796 | 0.0 |
| 0.3324 | 4.0 | 20 | 0.6716 | 0.0059 | 2398.2346 | 1662.3295 | 1794.0 | 2475.0 | 0.7248 | 125.0 | 0.0505 | 9.0 | 831.0 | 1196.0 | 0.6948 | 0.0075 | 108.0 | 963.0 | 1267.0 | 0.7601 | 0.0852 |
| 0.7534 | 5.0 | 25 | 1.2676 | 0.0059 | 4526.0621 | 3137.2272 | 1499.0 | 2475.0 | 0.6057 | 932.0 | 0.3766 | 82.0 | 267.0 | 1196.0 | 0.2232 | 0.0686 | 842.0 | 1232.0 | 1267.0 | 0.9724 | 0.6646 |
| 0.2081 | 6.0 | 30 | 1.5980 | 0.0059 | 5705.7968 | 3954.9570 | 1505.0 | 2475.0 | 0.6081 | 703.0 | 0.2840 | 618.0 | 1175.0 | 1196.0 | 0.9824 | 0.5167 | 77.0 | 330.0 | 1267.0 | 0.2605 | 0.0608 |
| 0.082 | 7.0 | 35 | 1.1486 | 0.0059 | 4101.3733 | 2842.8553 | 1612.0 | 2475.0 | 0.6513 | 992.0 | 0.4008 | 120.0 | 449.0 | 1196.0 | 0.3754 | 0.1003 | 863.0 | 1163.0 | 1267.0 | 0.9179 | 0.6811 |
| 0.6616 | 8.0 | 40 | 1.2311 | 0.0059 | 4395.8751 | 3046.9884 | 1779.0 | 2475.0 | 0.7188 | 1492.0 | 0.6028 | 826.0 | 1015.0 | 1196.0 | 0.8487 | 0.6906 | 657.0 | 764.0 | 1267.0 | 0.6030 | 0.5185 |
| 0.0017 | 9.0 | 45 | 1.6432 | 0.0059 | 5867.3174 | 4066.9145 | 1756.0 | 2475.0 | 0.7095 | 1610.0 | 0.6505 | 923.0 | 1023.0 | 1196.0 | 0.8554 | 0.7717 | 678.0 | 733.0 | 1267.0 | 0.5785 | 0.5351 |
| 0.0001 | 10.0 | 50 | 2.1381 | 0.0059 | 7634.3190 | 5291.7067 | 1718.0 | 2475.0 | 0.6941 | 1546.0 | 0.6246 | 983.0 | 1082.0 | 1196.0 | 0.9047 | 0.8219 | 554.0 | 636.0 | 1267.0 | 0.5020 | 0.4373 |
| 0.0001 | 11.0 | 55 | 1.4472 | 0.0059 | 5167.3448 | 3581.7305 | 1813.0 | 2475.0 | 0.7325 | 1610.0 | 0.6505 | 792.0 | 898.0 | 1196.0 | 0.7508 | 0.6622 | 809.0 | 915.0 | 1267.0 | 0.7222 | 0.6385 |
| 0.0 | 12.0 | 60 | 1.4471 | 0.0059 | 5167.1333 | 3581.5839 | 1815.0 | 2475.0 | 0.7333 | 1670.0 | 0.6747 | 770.0 | 835.0 | 1196.0 | 0.6982 | 0.6438 | 891.0 | 980.0 | 1267.0 | 0.7735 | 0.7032 |
| 0.0 | 13.0 | 65 | 1.4645 | 0.0059 | 5229.3257 | 3624.6924 | 1820.0 | 2475.0 | 0.7354 | 1726.0 | 0.6974 | 812.0 | 852.0 | 1196.0 | 0.7124 | 0.6789 | 905.0 | 968.0 | 1267.0 | 0.7640 | 0.7143 |
| 1.3069 | 14.0 | 70 | 1.4787 | 0.0059 | 5279.8389 | 3659.7055 | 1822.0 | 2475.0 | 0.7362 | 1743.0 | 0.7042 | 834.0 | 870.0 | 1196.0 | 0.7274 | 0.6973 | 900.0 | 952.0 | 1267.0 | 0.7514 | 0.7103 |
| 0.6534 | 15.0 | 75 | 1.4931 | 0.0059 | 5331.4229 | 3695.4608 | 1820.0 | 2475.0 | 0.7354 | 1757.0 | 0.7099 | 859.0 | 888.0 | 1196.0 | 0.7425 | 0.7182 | 889.0 | 932.0 | 1267.0 | 0.7356 | 0.7017 |
| 0.6535 | 16.0 | 80 | 1.5030 | 0.0059 | 5366.7260 | 3719.9310 | 1818.0 | 2475.0 | 0.7345 | 1766.0 | 0.7135 | 869.0 | 893.0 | 1196.0 | 0.7467 | 0.7266 | 888.0 | 925.0 | 1267.0 | 0.7301 | 0.7009 |
| 0.0 | 17.0 | 85 | 1.5122 | 0.0059 | 5399.3942 | 3742.5749 | 1820.0 | 2475.0 | 0.7354 | 1767.0 | 0.7139 | 874.0 | 898.0 | 1196.0 | 0.7508 | 0.7308 | 884.0 | 922.0 | 1267.0 | 0.7277 | 0.6977 |
| 0.0 | 18.0 | 90 | 1.5168 | 0.0059 | 5415.9772 | 3754.0693 | 1822.0 | 2475.0 | 0.7362 | 1772.0 | 0.7160 | 879.0 | 902.0 | 1196.0 | 0.7542 | 0.7349 | 884.0 | 920.0 | 1267.0 | 0.7261 | 0.6977 |
| 0.0 | 19.0 | 95 | 1.5232 | 0.0059 | 5438.9175 | 3769.9703 | 1822.0 | 2475.0 | 0.7362 | 1774.0 | 0.7168 | 881.0 | 903.0 | 1196.0 | 0.7550 | 0.7366 | 884.0 | 919.0 | 1267.0 | 0.7253 | 0.6977 |
| 0.0 | 20.0 | 100 | 1.5241 | 0.0059 | 5442.2286 | 3772.2654 | 1819.0 | 2475.0 | 0.7349 | 1771.0 | 0.7156 | 884.0 | 905.0 | 1196.0 | 0.7567 | 0.7391 | 878.0 | 914.0 | 1267.0 | 0.7214 | 0.6930 |
| 0.0 | 21.0 | 105 | 1.5278 | 0.0059 | 5455.2160 | 3781.2676 | 1821.0 | 2475.0 | 0.7358 | 1778.0 | 0.7184 | 884.0 | 905.0 | 1196.0 | 0.7567 | 0.7391 | 885.0 | 916.0 | 1267.0 | 0.7230 | 0.6985 |
| 0.6535 | 22.0 | 110 | 1.5296 | 0.0059 | 5461.6471 | 3785.7253 | 1819.0 | 2475.0 | 0.7349 | 1776.0 | 0.7176 | 887.0 | 907.0 | 1196.0 | 0.7584 | 0.7416 | 880.0 | 912.0 | 1267.0 | 0.7198 | 0.6946 |
| 0.0 | 23.0 | 115 | 1.5328 | 0.0059 | 5473.0012 | 3793.5954 | 1821.0 | 2475.0 | 0.7358 | 1782.0 | 0.72 | 888.0 | 907.0 | 1196.0 | 0.7584 | 0.7425 | 885.0 | 914.0 | 1267.0 | 0.7214 | 0.6985 |
| 0.0 | 24.0 | 120 | 1.5339 | 0.0059 | 5477.0890 | 3796.4288 | 1821.0 | 2475.0 | 0.7358 | 1778.0 | 0.7184 | 889.0 | 910.0 | 1196.0 | 0.7609 | 0.7433 | 880.0 | 911.0 | 1267.0 | 0.7190 | 0.6946 |
| 1.3069 | 25.0 | 125 | 1.5357 | 0.0059 | 5483.3601 | 3800.7756 | 1818.0 | 2475.0 | 0.7345 | 1777.0 | 0.7180 | 886.0 | 907.0 | 1196.0 | 0.7584 | 0.7408 | 882.0 | 911.0 | 1267.0 | 0.7190 | 0.6961 |
| 0.0 | 26.0 | 130 | 1.5390 | 0.0059 | 5495.1006 | 3808.9135 | 1820.0 | 2475.0 | 0.7354 | 1779.0 | 0.7188 | 888.0 | 909.0 | 1196.0 | 0.7600 | 0.7425 | 882.0 | 911.0 | 1267.0 | 0.7190 | 0.6961 |
| 0.6534 | 27.0 | 135 | 1.5373 | 0.0059 | 5489.3342 | 3804.9165 | 1820.0 | 2475.0 | 0.7354 | 1782.0 | 0.72 | 889.0 | 908.0 | 1196.0 | 0.7592 | 0.7433 | 884.0 | 912.0 | 1267.0 | 0.7198 | 0.6977 |
| 0.0 | 28.0 | 140 | 1.5419 | 0.0059 | 5505.6494 | 3816.2253 | 1822.0 | 2475.0 | 0.7362 | 1780.0 | 0.7192 | 890.0 | 911.0 | 1196.0 | 0.7617 | 0.7441 | 881.0 | 911.0 | 1267.0 | 0.7190 | 0.6953 |
| 0.0 | 29.0 | 145 | 1.5433 | 0.0059 | 5510.5924 | 3819.6516 | 1821.0 | 2475.0 | 0.7358 | 1779.0 | 0.7188 | 889.0 | 910.0 | 1196.0 | 0.7609 | 0.7433 | 881.0 | 911.0 | 1267.0 | 0.7190 | 0.6953 |
| 0.0 | 30.0 | 150 | 1.5439 | 0.0059 | 5512.6644 | 3821.0878 | 1819.0 | 2475.0 | 0.7349 | 1777.0 | 0.7180 | 889.0 | 909.0 | 1196.0 | 0.7600 | 0.7433 | 879.0 | 910.0 | 1267.0 | 0.7182 | 0.6938 |
| 0.0 | 31.0 | 155 | 1.5443 | 0.0059 | 5514.1591 | 3822.1238 | 1820.0 | 2475.0 | 0.7354 | 1781.0 | 0.7196 | 890.0 | 911.0 | 1196.0 | 0.7617 | 0.7441 | 882.0 | 909.0 | 1267.0 | 0.7174 | 0.6961 |
| 0.6534 | 32.0 | 160 | 1.5471 | 0.0059 | 5524.2001 | 3829.0837 | 1820.0 | 2475.0 | 0.7354 | 1776.0 | 0.7176 | 891.0 | 912.0 | 1196.0 | 0.7625 | 0.7450 | 876.0 | 908.0 | 1267.0 | 0.7167 | 0.6914 |
| 0.0 | 33.0 | 165 | 1.5472 | 0.0059 | 5524.7178 | 3829.4426 | 1821.0 | 2475.0 | 0.7358 | 1778.0 | 0.7184 | 891.0 | 912.0 | 1196.0 | 0.7625 | 0.7450 | 878.0 | 909.0 | 1267.0 | 0.7174 | 0.6930 |
| 0.0 | 34.0 | 170 | 1.5496 | 0.0059 | 5533.2649 | 3835.3670 | 1817.0 | 2475.0 | 0.7341 | 1777.0 | 0.7180 | 890.0 | 911.0 | 1196.0 | 0.7617 | 0.7441 | 878.0 | 906.0 | 1267.0 | 0.7151 | 0.6930 |
| 0.0 | 35.0 | 175 | 1.5519 | 0.0059 | 5541.3527 | 3840.9730 | 1820.0 | 2475.0 | 0.7354 | 1780.0 | 0.7192 | 890.0 | 910.0 | 1196.0 | 0.7609 | 0.7441 | 881.0 | 910.0 | 1267.0 | 0.7182 | 0.6953 |
| 0.0 | 36.0 | 180 | 1.5514 | 0.0059 | 5539.5094 | 3839.6954 | 1820.0 | 2475.0 | 0.7354 | 1781.0 | 0.7196 | 891.0 | 912.0 | 1196.0 | 0.7625 | 0.7450 | 881.0 | 908.0 | 1267.0 | 0.7167 | 0.6953 |
| 0.0 | 37.0 | 185 | 1.5539 | 0.0059 | 5548.5974 | 3845.9946 | 1819.0 | 2475.0 | 0.7349 | 1780.0 | 0.7192 | 891.0 | 912.0 | 1196.0 | 0.7625 | 0.7450 | 880.0 | 907.0 | 1267.0 | 0.7159 | 0.6946 |
| 0.0 | 38.0 | 190 | 1.5534 | 0.0059 | 5546.8413 | 3844.7774 | 1819.0 | 2475.0 | 0.7349 | 1781.0 | 0.7196 | 892.0 | 912.0 | 1196.0 | 0.7625 | 0.7458 | 880.0 | 907.0 | 1267.0 | 0.7159 | 0.6946 |
| 0.0 | 39.0 | 195 | 1.5541 | 0.0059 | 5549.1300 | 3846.3638 | 1820.0 | 2475.0 | 0.7354 | 1781.0 | 0.7196 | 892.0 | 912.0 | 1196.0 | 0.7625 | 0.7458 | 880.0 | 908.0 | 1267.0 | 0.7167 | 0.6946 |
| 0.0 | 40.0 | 200 | 1.5561 | 0.0059 | 5556.3793 | 3851.3886 | 1821.0 | 2475.0 | 0.7358 | 1785.0 | 0.7212 | 894.0 | 914.0 | 1196.0 | 0.7642 | 0.7475 | 882.0 | 907.0 | 1267.0 | 0.7159 | 0.6961 |
| 0.6534 | 41.0 | 205 | 1.5581 | 0.0059 | 5563.5837 | 3856.3823 | 1815.0 | 2475.0 | 0.7333 | 1778.0 | 0.7184 | 891.0 | 910.0 | 1196.0 | 0.7609 | 0.7450 | 878.0 | 905.0 | 1267.0 | 0.7143 | 0.6930 |
| 0.6534 | 42.0 | 210 | 1.5582 | 0.0059 | 5563.8211 | 3856.5469 | 1819.0 | 2475.0 | 0.7349 | 1784.0 | 0.7208 | 893.0 | 913.0 | 1196.0 | 0.7634 | 0.7467 | 882.0 | 906.0 | 1267.0 | 0.7151 | 0.6961 |
| 0.6534 | 43.0 | 215 | 1.5591 | 0.0059 | 5566.9433 | 3858.7111 | 1819.0 | 2475.0 | 0.7349 | 1784.0 | 0.7208 | 895.0 | 915.0 | 1196.0 | 0.7651 | 0.7483 | 880.0 | 904.0 | 1267.0 | 0.7135 | 0.6946 |
| 0.0 | 44.0 | 220 | 1.5600 | 0.0059 | 5570.2078 | 3860.9738 | 1818.0 | 2475.0 | 0.7345 | 1779.0 | 0.7188 | 893.0 | 913.0 | 1196.0 | 0.7634 | 0.7467 | 877.0 | 905.0 | 1267.0 | 0.7143 | 0.6922 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
moekh/Reports-OCR-Training
|
moekh
| 2025-08-18T15:02:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-17T10:25:49Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Reports-OCR-Training
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for Reports-OCR-Training
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="moekh/Reports-OCR-Training", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/moekh-redf/Reports-OCR-H20/runs/ij335dot)
This model was trained with SFT.
### Framework versions
- TRL: 0.20.0
- Transformers: 4.53.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Mgolo/eng_hau
|
Mgolo
| 2025-08-18T14:56:21Z | 0 | 0 | null |
[
"safetensors",
"marian",
"license:apache-2.0",
"region:us"
] | null | 2025-08-18T14:43:54Z |
---
license: apache-2.0
---
|
vinhnx90/gpt-oss-20b-multilingual-reasoning
|
vinhnx90
| 2025-08-18T14:48:56Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T14:48:51Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** vinhnx90
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755526469
|
indoempatnol
| 2025-08-18T14:43:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:43:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dgambettaphd/M_mis_run2_gen3_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST
|
dgambettaphd
| 2025-08-18T14:35:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T14:35:39Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
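In the absence of an official snippet, here is a minimal loading sketch. It assumes this checkpoint is a causal language model (suggested by the `unsloth` tag); verify the architecture before relying on it.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: causal-LM checkpoint; confirm via config.json before use.
model_id = "dgambettaphd/M_mis_run2_gen3_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```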
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
baixuefeng/ChemReasoner-7B
|
baixuefeng
| 2025-08-18T14:32:49Z | 1 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-03-30T09:28:55Z |
# MOFReasoner
MOFReasoner (currently released as **ChemReasoner**) is a **domain-specific reasoning large language model (LLM)** designed to think like a scientist. It integrates **Chain-of-Thought (CoT) reasoning** and **knowledge distillation** to enhance scientific reasoning in chemistry, with a particular focus on **Metal-Organic Frameworks (MOFs)** adsorption research.
---
## 🚀 Introduction
General-purpose large language models (LLMs) have shown impressive capabilities in natural language understanding and reasoning. However, their lack of domain specialization limits their ability to perform **multi-step scientific reasoning**.
MOFReasoner addresses this limitation by incorporating domain-specific knowledge, scientific reasoning strategies, and structured CoT reasoning.
Key innovations:
- **Domain Knowledge Integration**: Leveraging over **8,200 research articles** and **500 review papers** to construct a domain-specific CoT dataset.
- **Knowledge Distillation**: Transferring expertise from large teacher models (e.g., DeepSeek-V3, Qwen-Turbo, DeepSeek-R1) into smaller, efficient student models.
- **Scientific Reasoning Skills**: Mimicking scientists’ problem-solving pathways, such as hypothesis generation, validation, and logical deduction.
- **Benchmarking & Applications**: Evaluated on tasks including experimental studies, chemical mechanisms, application scenarios, and industrialization challenges in MOFs research.
---
## 📊 Features
- **Multi-step reasoning** for scientific tasks (experiment design, reaction prediction, performance analysis).
- **Domain specialization** in **MOF adsorption**, catalysis, and chemical mechanism exploration.
- **High performance** compared to general-purpose LLMs (outperforming GPT-4.5, DeepSeek-R1, etc.).
- **Material recommendation ability** with accuracy comparable to Density Functional Theory (DFT).
- **Adaptability**: Easily extendable to other chemistry-related domains by incorporating domain CoT data.
---
## 📥 Model Access
- **Model weights (Hugging Face)**: [ChemReasoner-7B](https://huggingface.co/baixuefeng/ChemReasoner-7B)
- **Code repository (GitHub)**: [ChemReasoner-Code](https://github.com/MontageBai/ChemReasoner-Code)
⚠️ Note: The project will soon be renamed to **MOFReasoner**, but the current release is under the name **ChemReasoner**.
---
## ⚙️ How to Use
You can run the model directly from Hugging Face using [vLLM](https://github.com/vllm-project/vllm) or [SGLang](https://github.com/sgl-project/sglang).
### Example with vLLM
```bash
vllm serve baixuefeng/ChemReasoner-7B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
### Example with SGLang
```bash
python3 -m sglang.launch_server --model baixuefeng/ChemReasoner-7B --trust-remote-code --tp 2
```
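### Querying the server
Either server exposes an OpenAI-compatible endpoint. A minimal query sketch, assuming vLLM's default port (8000) and the `openai` Python client; the question is illustrative:
```python
from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")
response = client.chat.completions.create(
    model="baixuefeng/ChemReasoner-7B",  # the path passed to `vllm serve`
    messages=[{"role": "user", "content": "Which MOF properties govern CO2 adsorption capacity?"}],
)
print(response.choices[0].message.content)
```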
---
## 📈 Performance
MOFReasoner was evaluated against leading models (DeepSeek, Qwen, GPT series, etc.) across **four major task categories**:
- **Experimental Studies** of MOFs
- **Chemical Mechanisms** of adsorption
- **Application Scenarios** of MOF-based adsorbents
- **Industrialization Challenges**
Highlights:
- Achieved the **highest expert-evaluated score (25.5/30)**, outperforming GPT-4.5, o1-preview, and DeepSeek-R1.
- Provided **more accurate and reliable reasoning chains**, avoiding serious errors common in general-purpose models.
- Demonstrated **robust material recommendation**, consistent with DFT validation.
---
## 📜 License
- MOFReasoner is released under the **MIT License**.
- Distilled base models (Qwen, LLaMA) retain their original licenses (Apache 2.0 / LLaMA license).
---
## 📚 Citation
If you use MOFReasoner in your research, please cite:
```bibtex
@article{bai2025mofreasoner,
title={MOFReasoner: Think Like a Scientist—A Domain-Specific Reasoning LLM via Knowledge Distillation},
author={Bai, Xuefeng and Zheng, Zhiling and Wang, Hao-Tian and Yang, Rui and Zhang, Xin and Li, Jian-Rong},
}
```
---
## 📬 Contact
- **Corresponding Authors**:
- Prof. Jian-Rong Li, Beijing University of Technology (jrli@bjut.edu.cn)
- Prof. Xin Zhang, Beijing University of Technology (zhang.xin@bjut.edu.cn)
- **Project Maintainers**:
- Xuefeng Bai (Beijing University of Technology)
- Zhiling Zheng (MIT)
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755526015
|
lisaozill03
| 2025-08-18T14:32:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:32:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755525904
|
Sayemahsjn
| 2025-08-18T14:24:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:24:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kenil-patel-183/MNIST_classification_using_CNN
|
kenil-patel-183
| 2025-08-18T14:24:12Z | 0 | 0 | null |
[
"mnist",
"pytorch",
"DL",
"CNN",
"image-classification",
"en",
"license:mit",
"region:us"
] |
image-classification
| 2025-08-17T12:47:43Z |
---
language: en
license: mit
tags:
- mnist
- pytorch
- DL
- CNN
pipeline_tag: image-classification
---
# MNIST Classification using CNN
This model classifies handwritten digits (0–9) from the MNIST dataset using a custom PyTorch CNN, deployed on Hugging Face for inference.
## How to Use
You can use this model with the Hugging Face Inference API:
```bash
curl https://api-inference.huggingface.co/models/<username>/<model-name> \
-H "Authorization: Bearer YOUR_HF_TOKEN" \
  -d '{"inputs": {"image": "<image_url>"}}'
```
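## Example architecture
For reference, a minimal PyTorch CNN of the kind described above. This is an illustrative sketch, not the exact trained architecture (layer sizes are not documented here):
```python
import torch
import torch.nn as nn

class MnistCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 1x28x28 -> 32x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 32x14x14
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # -> 64x14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 64x7x7
        )
        self.classifier = nn.Linear(64 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = MnistCNN()(torch.randn(1, 1, 28, 28))  # shape: (1, 10)
```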
|
pattty007/V_E_C_S
|
pattty007
| 2025-08-18T14:22:19Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-18T13:59:30Z |
---
license: apache-2.0
---
|
nottrz/InternVL3-14B-4bit-bnb-openai-chat
|
nottrz
| 2025-08-18T14:21:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"internvl",
"custom_code",
"image-text-to-text",
"conversational",
"multilingual",
"dataset:OpenGVLab/MMPR-v1.2",
"arxiv:2312.14238",
"arxiv:2404.16821",
"arxiv:2412.05271",
"arxiv:2411.10442",
"arxiv:2504.10479",
"arxiv:2412.09616",
"base_model:OpenGVLab/InternVL3-14B-Instruct",
"base_model:finetune:OpenGVLab/InternVL3-14B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] |
image-text-to-text
| 2025-08-18T14:18:35Z |
---
license: apache-2.0
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: image-text-to-text
library_name: transformers
base_model:
- OpenGVLab/InternVL3-14B-Instruct
base_model_relation: finetune
datasets:
- OpenGVLab/MMPR-v1.2
language:
- multilingual
tags:
- internvl
- custom_code
---
**Disclaimer: Quantized Version with Custom Chat Template**
This model is a 4-bit NF4 quantized version of `OpenGVLab/InternVL3-14B`, processed using the BitsAndBytes library.
The primary modifications are:
1. **Quantization:** Weights are quantized to 4-bit using NF4.
2. **Chat Template:** The tokenizer's chat template has been updated to be compatible with OpenAI's multimodal message format (using `<image>` token for InternVL compatibility) and to handle tool/function calls as specified.
Please refer to the original model card below for details about the base model's architecture, training, and intended use cases.
All credit for the original model architecture and weights goes to the authors of `OpenGVLab/InternVL3-14B`.
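For convenience, a minimal loading sketch for this pre-quantized checkpoint. It assumes a CUDA machine and that the BitsAndBytes quantization config saved with the checkpoint is applied automatically on load (the standard behavior for `bitsandbytes`-serialized models):
```python
import torch
from transformers import AutoModel, AutoTokenizer

path = "nottrz/InternVL3-14B-4bit-bnb-openai-chat"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,  # compute dtype; weights remain 4-bit NF4
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map="auto",
).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
```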
---
*The section below is the original README content from OpenGVLab/InternVL3-14B.*
---
# InternVL3-14B
[\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271) [\[📜 InternVL2.5-MPO\]](https://huggingface.co/papers/2411.10442) [\[📜 InternVL3\]](https://huggingface.co/papers/2504.10479)
[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
<div align="center">
<img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
</div>
## Introduction
We introduce InternVL3, an advanced multimodal large language model (MLLM) series that demonstrates superior overall performance.
Compared to InternVL 2.5, InternVL3 exhibits superior multimodal perception and reasoning capabilities, while further extending its multimodal capabilities to encompass tool usage, GUI agents, industrial image analysis, 3D vision perception, and more.
Additionally, we compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3. Benefitting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.

## InternVL3 Family
In the following table, we provide an overview of the InternVL3 series.
| Model Name | Vision Part | Language Part | HF Link |
| :-----------: | :-------------------------------------------------------------------------------------: | :----------------------------------------------------------------------------: | :------------------------------------------------------: |
| InternVL3-1B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-1B) |
| InternVL3-2B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-2B) |
| InternVL3-8B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-8B) |
| InternVL3-9B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [internlm3-8b-instruct](https://huggingface.co/internlm/internlm3-8b-instruct) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-9B) |
| InternVL3-14B | [InternViT-300M-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-300M-448px-V2_5) | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-14B) |
| InternVL3-38B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-38B) |
| InternVL3-78B | [InternViT-6B-448px-V2_5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V2_5) | [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL3-78B) |

## Model Architecture
As shown in the following figure, [InternVL3](https://internvl.github.io/blog/2025-04-11-InternVL-3/) retains the same model architecture as [InternVL 2.5](https://internvl.github.io/blog/2024-12-05-InternVL-2.5/) and its predecessors, InternVL 1.5 and 2.0, following the "ViT-MLP-LLM" paradigm. In this new version, we integrate a newly incrementally pre-trained InternViT with various pre-trained LLMs, including InternLM 3 and Qwen 2.5, using a randomly initialized MLP projector.

As in the previous version, we applied a pixel unshuffle operation, reducing the number of visual tokens to one-quarter of the original. In addition, we adopted a dynamic resolution strategy similar to InternVL 1.5's, dividing images into tiles of 448×448 pixels. The key difference, starting from InternVL 2.0, is that we additionally introduced support for multi-image and video data.
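To make the token reduction concrete: pixel unshuffle with a factor of 2 folds each 2×2 spatial block into the channel dimension, shrinking the token grid to one-quarter of its original count. A minimal sketch of the shape transformation using PyTorch's built-in op (illustrative shapes, not the model's actual feature dimensions):
```python
import torch

unshuffle = torch.nn.PixelUnshuffle(downscale_factor=2)
feats = torch.randn(1, 1024, 32, 32)  # (batch, channels, H, W) visual features
out = unshuffle(feats)                # (1, 4096, 16, 16): 4x channels, 1/4 the spatial tokens
```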
Notably, in InternVL3, we integrate the [Variable Visual Position Encoding (V2PE)](https://arxiv.org/abs/2412.09616), which utilizes smaller, more flexible position increments for visual tokens. Benefiting from V2PE, InternVL3 exhibits better long context understanding capabilities compared to its predecessors.
## Training Strategy
### Native Multimodal Pre-Training
We propose a [Native Multimodal Pre-Training](https://huggingface.co/papers/2504.10479) approach that consolidates language and vision learning into a single pre-training stage.
In contrast to standard paradigms that first train a language-only model and subsequently adapt it to handle additional modalities, our method interleaves multimodal data (e.g., image-text, video-text, or image-text interleaved sequences) with large-scale textual corpora. This unified training scheme allows the model to learn both linguistic and multimodal representations simultaneously, ultimately enhancing its capability to handle vision-language tasks without the need for separate alignment or bridging modules.
Please see [our paper](https://huggingface.co/papers/2504.10479) for more details.
### Supervised Fine-Tuning
In this phase, the techniques of random JPEG compression, square loss re-weighting, and multimodal data packing proposed in [InternVL2.5](https://arxiv.org/abs/2412.05271) are also employed in the InternVL3 series.
The main advancement of the SFT phase in InternVL3 compared to InternVL2.5 lies in the use of higher-quality and more diverse training data.
Specifically, we further extend training samples for tool use, 3D scene understanding, GUI operations, long context tasks, video understanding, scientific diagrams, creative writing, and multimodal reasoning.
### Mixed Preference Optimization
During Pre-training and SFT, the model is trained to predict the next token conditioned on previous ground-truth tokens.
However, during inference, the model predicts each token based on its own prior outputs.
This discrepancy between ground-truth tokens and model-predicted tokens introduces a distribution shift, which can impair the model’s Chain-of-Thought (CoT) reasoning capabilities.
To mitigate this issue, we employ [MPO](https://arxiv.org/abs/2411.10442), which introduces additional supervision from both positive and negative samples to align the model response distribution with the ground-truth distribution, thereby improving reasoning performance.
Specifically, the training objective of MPO is a combination of
preference loss \\(\mathcal{L}_{\text{p}}\\),
quality loss \\(\mathcal{L}_{\text{q}}\\),
and generation loss \\(\mathcal{L}_{\text{g}}\\),
which can be formulated as follows:
$$
\mathcal{L}=w_{p}\cdot\mathcal{L}_{\text{p}} + w_{q}\cdot\mathcal{L}_{\text{q}} + w_{g}\cdot\mathcal{L}_{\text{g}},
$$
where \\(w_{*}\\) represents the weight assigned to each loss component. Please see [our paper](https://arxiv.org/abs/2411.10442) for more details about MPO.
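In code, the combined objective is a simple weighted sum. A minimal sketch with hypothetical weights; the three component losses are assumed to be computed elsewhere:
```python
import torch

w_p, w_q, w_g = 0.8, 0.2, 1.0  # hypothetical weights, for illustration only

loss_p = torch.tensor(0.51)  # preference loss (placeholder value)
loss_q = torch.tensor(0.34)  # quality loss (placeholder value)
loss_g = torch.tensor(1.27)  # generation loss (placeholder value)

loss = w_p * loss_p + w_q * loss_q + w_g * loss_g
```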
### Test-Time Scaling
Test-Time Scaling has been shown to be an effective method to enhance the reasoning abilities of LLMs and MLLMs.
In this work, we use the Best-of-N evaluation strategy and employ [VisualPRM-8B](https://huggingface.co/OpenGVLab/VisualPRM-8B) as the critic model to select the best response for reasoning and mathematics evaluation.
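The Best-of-N strategy itself is straightforward: sample N candidate responses and keep the one the critic scores highest. A schematic sketch, where `generate` and `score` stand in for the MLLM and the VisualPRM critic:
```python
def best_of_n(question, generate, score, n: int = 8):
    # Sample N candidates, then return the response the critic prefers.
    candidates = [generate(question) for _ in range(n)]
    return max(candidates, key=lambda response: score(question, response))
```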
## Evaluation on Multimodal Capability
### Multimodal Reasoning and Mathematics

### OCR, Chart, and Document Understanding

### Multi-Image & Real-World Comprehension

### Comprehensive Multimodal & Hallucination Evaluation

### Visual Grounding

### Multimodal Multilingual Understanding

### Video Understanding

### GUI Grounding

### Spatial Reasoning

## Evaluation on Language Capability
We compare InternVL3 with Qwen2.5 Chat models, whose corresponding pre-trained base models are employed as the initialization of the language component in InternVL3.
Benefitting from Native Multimodal Pre-Training, the InternVL3 series achieves even better overall text performance than the Qwen2.5 series.
Please note that the evaluation scores of Qwen2.5 series may differ from those officially reported, as we have adopted the prompt versions provided in the table across all datasets for OpenCompass evaluation.

## Ablation Study
### Native Multimodal Pre-Training
We conduct experiments on the InternVL2-8B model while keeping its architecture, initialization parameters, and training data entirely unchanged. Traditionally, InternVL2-8B employs a training pipeline that begins with an MLP warmup phase for feature alignment followed by an Instruction Tuning stage. In our experiments, we substitute the conventional MLP warmup phase with a native multimodal pre-training process. This modification isolates the contribution of native multimodal pre-training to the overall multimodal capability of the model.
The evaluation results in the figure below show that the model with native multimodal pre-training exhibits performance on most benchmarks comparable to the fully multi-stage-trained InternVL2-8B baseline. Furthermore, when followed by instruction tuning on higher-quality data, the model demonstrates further performance gains across evaluated multimodal tasks. These findings underscore the efficiency of native multimodal pre-training in imparting powerful multimodal capabilities to MLLMs.

### Mixed Preference Optimization
As shown in the table below, models fine-tuned with MPO demonstrate superior reasoning performance across seven multimodal reasoning benchmarks compared to their counterparts without MPO. Specifically, InternVL3-78B and InternVL3-38B outperform their counterparts by 4.1 and 4.5 points, respectively. Notably, the training data used for MPO is a subset of that used for SFT, indicating that the performance improvements primarily stem from the training algorithm rather than the training data.

### Variable Visual Position Encoding
As reported in the table below, the introduction of V2PE leads to significant performance gains across most evaluation metrics. In addition, our ablation studies—by varying the positional increment \\( \delta \\)—reveal that even for tasks primarily involving conventional contexts, relatively small \\( \delta \\) values can achieve optimal performance. These findings provide important insights for future efforts aimed at refining position encoding strategies for visual tokens in MLLMs.

## Quick Start
We provide an example code to run `InternVL3-14B` using `transformers`.
> Please use transformers>=4.37.2 to ensure the model works normally.
### Model Loading
#### 16-bit (bf16 / fp16)
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3-14B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval().cuda()
```
#### BNB 8-bit Quantization
```python
import torch
from transformers import AutoTokenizer, AutoModel
path = "OpenGVLab/InternVL3-14B"
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
load_in_8bit=True,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True).eval()
```
#### Multiple GPUs
The reason for writing the code this way is to avoid errors that occur during multi-GPU inference due to tensors not being on the same device. By ensuring that the first and last layers of the large language model (LLM) are on the same device, we prevent such errors.
```python
import math
import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer
def split_model(model_name):
device_map = {}
world_size = torch.cuda.device_count()
    config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
num_layers = config.llm_config.num_hidden_layers
# Since the first GPU will be used for ViT, treat it as half a GPU.
num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
num_layers_per_gpu = [num_layers_per_gpu] * world_size
num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
layer_cnt = 0
for i, num_layer in enumerate(num_layers_per_gpu):
for j in range(num_layer):
device_map[f'language_model.model.layers.{layer_cnt}'] = i
layer_cnt += 1
device_map['vision_model'] = 0
device_map['mlp1'] = 0
device_map['language_model.model.tok_embeddings'] = 0
device_map['language_model.model.embed_tokens'] = 0
device_map['language_model.output'] = 0
device_map['language_model.model.norm'] = 0
device_map['language_model.model.rotary_emb'] = 0
device_map['language_model.lm_head'] = 0
device_map[f'language_model.model.layers.{num_layers - 1}'] = 0
return device_map
path = "OpenGVLab/InternVL3-14B"
device_map = split_model('InternVL3-14B')
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True,
device_map=device_map).eval()
```
### Inference with Transformers
```python
import math
import numpy as np
import torch
import torchvision.transforms as T
from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoConfig, AutoModel, AutoTokenizer
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
def load_image(image_file, input_size=448, max_num=12):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
def split_model(model_name):
device_map = {}
world_size = torch.cuda.device_count()
    config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
num_layers = config.llm_config.num_hidden_layers
# Since the first GPU will be used for ViT, treat it as half a GPU.
num_layers_per_gpu = math.ceil(num_layers / (world_size - 0.5))
num_layers_per_gpu = [num_layers_per_gpu] * world_size
num_layers_per_gpu[0] = math.ceil(num_layers_per_gpu[0] * 0.5)
layer_cnt = 0
for i, num_layer in enumerate(num_layers_per_gpu):
for j in range(num_layer):
device_map[f'language_model.model.layers.{layer_cnt}'] = i
layer_cnt += 1
device_map['vision_model'] = 0
device_map['mlp1'] = 0
device_map['language_model.model.tok_embeddings'] = 0
device_map['language_model.model.embed_tokens'] = 0
device_map['language_model.output'] = 0
device_map['language_model.model.norm'] = 0
device_map['language_model.model.rotary_emb'] = 0
device_map['language_model.lm_head'] = 0
device_map[f'language_model.model.layers.{num_layers - 1}'] = 0
return device_map
# If you set `load_in_8bit=True`, you will need two 80GB GPUs.
# If you set `load_in_8bit=False`, you will need at least three 80GB GPUs.
path = 'OpenGVLab/InternVL3-14B'
device_map = split_model('InternVL3-14B')
model = AutoModel.from_pretrained(
path,
torch_dtype=torch.bfloat16,
load_in_8bit=False,
low_cpu_mem_usage=True,
use_flash_attn=True,
trust_remote_code=True,
device_map=device_map).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
# set the max number of tiles in `max_num`
pixel_values = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)
# pure-text conversation (纯文本对话)
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Can you tell me a story?'
response, history = model.chat(tokenizer, None, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# single-image single-round conversation (单图单轮对话)
question = '<image>\nPlease describe the image shortly.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(f'User: {question}\nAssistant: {response}')
# single-image multi-round conversation (单图多轮对话)
question = '<image>\nPlease describe the image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Please write a poem according to the image.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, combined images (多图多轮对话,拼接图像)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
question = '<image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# multi-image multi-round conversation, separate images (多图多轮对话,独立图像)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
question = 'Image-1: <image>\nImage-2: <image>\nDescribe the two images in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'What are the similarities and differences between these two images.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list,
history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
# batch inference, single image per sample (单图批处理)
pixel_values1 = load_image('./examples/image1.jpg', max_num=12).to(torch.bfloat16).cuda()
pixel_values2 = load_image('./examples/image2.jpg', max_num=12).to(torch.bfloat16).cuda()
num_patches_list = [pixel_values1.size(0), pixel_values2.size(0)]
pixel_values = torch.cat((pixel_values1, pixel_values2), dim=0)
questions = ['<image>\nDescribe the image in detail.'] * len(num_patches_list)
responses = model.batch_chat(tokenizer, pixel_values,
num_patches_list=num_patches_list,
questions=questions,
generation_config=generation_config)
for question, response in zip(questions, responses):
print(f'User: {question}\nAssistant: {response}')
# video multi-round conversation (视频多轮对话)
def get_index(bound, fps, max_frame, first_idx=0, num_segments=32):
if bound:
start, end = bound[0], bound[1]
else:
start, end = -100000, 100000
start_idx = max(first_idx, round(start * fps))
end_idx = min(round(end * fps), max_frame)
seg_size = float(end_idx - start_idx) / num_segments
frame_indices = np.array([
int(start_idx + (seg_size / 2) + np.round(seg_size * idx))
for idx in range(num_segments)
])
return frame_indices
def load_video(video_path, bound=None, input_size=448, max_num=1, num_segments=32):
vr = VideoReader(video_path, ctx=cpu(0), num_threads=1)
max_frame = len(vr) - 1
fps = float(vr.get_avg_fps())
pixel_values_list, num_patches_list = [], []
transform = build_transform(input_size=input_size)
frame_indices = get_index(bound, fps, max_frame, first_idx=0, num_segments=num_segments)
for frame_index in frame_indices:
img = Image.fromarray(vr[frame_index].asnumpy()).convert('RGB')
img = dynamic_preprocess(img, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(tile) for tile in img]
pixel_values = torch.stack(pixel_values)
num_patches_list.append(pixel_values.shape[0])
pixel_values_list.append(pixel_values)
pixel_values = torch.cat(pixel_values_list)
return pixel_values, num_patches_list
video_path = './examples/red-panda.mp4'
pixel_values, num_patches_list = load_video(video_path, num_segments=8, max_num=1)
pixel_values = pixel_values.to(torch.bfloat16).cuda()
video_prefix = ''.join([f'Frame{i+1}: <image>\n' for i in range(len(num_patches_list))])
question = video_prefix + 'What is the red panda doing?'
# Frame1: <image>\nFrame2: <image>\n...\nFrame8: <image>\n{question}
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
question = 'Describe this video in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
num_patches_list=num_patches_list, history=history, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
#### Streaming Output
In addition to this method, you can use the following code to stream the output.
```python
from transformers import TextIteratorStreamer
from threading import Thread
# Initialize the streamer
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10)
# Define the generation configuration
generation_config = dict(max_new_tokens=1024, do_sample=False, streamer=streamer)
# Start the model chat in a separate thread
thread = Thread(target=model.chat, kwargs=dict(
tokenizer=tokenizer, pixel_values=pixel_values, question=question,
history=None, return_history=False, generation_config=generation_config,
))
thread.start()
# Initialize an empty string to store the generated text
generated_text = ''
# Loop through the streamer to get the new text as it is generated
for new_text in streamer:
if new_text == model.conv_template.sep:
break
generated_text += new_text
print(new_text, end='', flush=True) # Print each new chunk of generated text on the same line
```
## Finetune
Many repositories now support fine-tuning of the InternVL series models, including [InternVL](https://github.com/OpenGVLab/InternVL), [SWIFT](https://github.com/modelscope/ms-swift), [XTurner](https://github.com/InternLM/xtuner), and others. Please refer to their documentation for more details on fine-tuning.
## Deployment
### LMDeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs & VLMs.
```sh
# if lmdeploy<0.7.3, you need to explicitly set chat_template_config=ChatTemplateConfig(model_name='internvl2_5')
pip install "lmdeploy>=0.7.3"
```
LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
#### A 'Hello, world' Example
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-14B'
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
response = pipe(('describe this image', image))
print(response.text)
```
If an `ImportError` occurs while running this example, install the required dependencies as prompted.
#### Multi-images Inference
When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
from lmdeploy.vl.constants import IMAGE_TOKEN
model = 'OpenGVLab/InternVL3-14B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image_urls=[
'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg'
]
images = [load_image(img_url) for img_url in image_urls]
# Numbering images improves multi-image conversations
response = pipe((f'Image-1: {IMAGE_TOKEN}\nImage-2: {IMAGE_TOKEN}\ndescribe these two images', images))
print(response.text)
```
#### Batch Prompts Inference
Conducting inference with batch prompts is quite straightforward; just place them within a list structure:
```python
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-14B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image_urls=[
"https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg",
"https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg"
]
prompts = [('describe this image', load_image(img_url)) for img_url in image_urls]
response = pipe(prompts)
print(response)
```
#### Multi-turn Conversation
There are two ways to run multi-turn conversations with the pipeline. One is to construct messages in the OpenAI format and use the method introduced above; the other is to use the `pipeline.chat` interface.
```python
from lmdeploy import pipeline, TurbomindEngineConfig, GenerationConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
model = 'OpenGVLab/InternVL3-14B'
pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=16384, tp=1), chat_template_config=ChatTemplateConfig(model_name='internvl2_5'))
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg')
gen_config = GenerationConfig(top_k=40, top_p=0.8, temperature=0.8)
sess = pipe.chat(('describe this image', image), gen_config=gen_config)
print(sess.response.text)
sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config)
print(sess.response.text)
```
#### Service
LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of service startup:
```shell
lmdeploy serve api_server OpenGVLab/InternVL3-14B --chat-template internvl2_5 --server-port 23333 --tp 1
```
To use the OpenAI-style interface, you need to install OpenAI:
```shell
pip install openai
```
Then, use the code below to make the API call:
```python
from openai import OpenAI
client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
model_name = client.models.list().data[0].id
response = client.chat.completions.create(
model=model_name,
messages=[{
'role':
'user',
'content': [{
'type': 'text',
'text': 'describe this image',
}, {
'type': 'image_url',
'image_url': {
'url':
'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
},
}],
}],
temperature=0.8,
top_p=0.8)
print(response)
```
## License
This project is released under the MIT License. This project uses the pre-trained Qwen2.5 as a component, which is licensed under the Apache-2.0 License.
## Citation
If you find this project useful in your research, please consider citing:
```BibTeX
@article{chen2024expanding,
title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
journal={arXiv preprint arXiv:2412.05271},
year={2024}
}
@article{wang2024mpo,
title={Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization},
author={Wang, Weiyun and Chen, Zhe and Wang, Wenhai and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Zhu, Jinguo and Zhu, Xizhou and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
journal={arXiv preprint arXiv:2411.10442},
year={2024}
}
@article{chen2024far,
title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
journal={arXiv preprint arXiv:2404.16821},
year={2024}
}
@inproceedings{chen2024internvl,
title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={24185--24198},
year={2024}
}
```
|
cicboy/my_awesome_qa_model
|
cicboy
| 2025-08-18T14:20:45Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-08-18T09:49:04Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2139 | 1.0 | 5475 | 1.1628 |
| 0.9801 | 2.0 | 10950 | 1.1107 |
| 0.7503 | 3.0 | 16425 | 1.1538 |
### Framework versions
- Transformers 4.55.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
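## How to use
A minimal inference sketch with the `transformers` pipeline, assuming extractive question answering as indicated by the model's task tag (example inputs are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="cicboy/my_awesome_qa_model")
result = qa(
    question="What was the model fine-tuned from?",
    context="my_awesome_qa_model is a fine-tuned version of distilbert-base-uncased.",
)
print(result["answer"], result["score"])
```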
|
ench100/bodyandface
|
ench100
| 2025-08-18T14:18:21Z | 59 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:lodestones/Chroma",
"base_model:adapter:lodestones/Chroma",
"region:us"
] |
text-to-image
| 2025-08-12T08:58:41Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/2.png
text: '-'
base_model: lodestones/Chroma
instance_prompt: null
---
# forME
<Gallery />
## Download model
[Download](/ench100/bodyandface/tree/main) them in the Files & versions tab.
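## Use with 🧨 diffusers
A loading sketch that makes two assumptions worth checking in the Files tab: that the base model `lodestones/Chroma` is loadable via `DiffusionPipeline`, and that the LoRA weight file is named `lora.safetensors`:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("lodestones/Chroma", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("ench100/bodyandface", weight_name="lora.safetensors")  # filename assumed
image = pipe("portrait photo").images[0]
```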
|
WaiLwin/topology_results
|
WaiLwin
| 2025-08-18T14:18:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-18T14:17:52Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: topology_results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topology_results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0109
- Accuracy: 0.9977
- F1: 0.9977
- Precision: 0.9977
- Recall: 0.9977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0003 | 1.0 | 1004 | 0.0130 | 0.9977 | 0.9977 | 0.9977 | 0.9977 |
| 0.0001 | 2.0 | 2008 | 0.0158 | 0.9965 | 0.9965 | 0.9965 | 0.9965 |
| 0.0001 | 3.0 | 3012 | 0.0036 | 0.9988 | 0.9988 | 0.9988 | 0.9988 |
### Framework versions
- Transformers 4.55.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
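## How to use
A minimal inference sketch with the `transformers` pipeline. The label set is not documented here, so the output labels should be verified against the model's `config.json`:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="WaiLwin/topology_results")
print(clf("An example sentence to classify."))
```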
|
BootesVoid/cmeh250ii0o6srts8j6939u0n_cmeh53idu0objrts8tgpz0o5o
|
BootesVoid
| 2025-08-18T14:09:52Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-18T14:09:51Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: RUBIO0808
---
# Cmeh250Ii0O6Srts8J6939U0N_Cmeh53Idu0Objrts8Tgpz0O5O
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `RUBIO0808` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "RUBIO0808",
"lora_weights": "https://huggingface.co/BootesVoid/cmeh250ii0o6srts8j6939u0n_cmeh53idu0objrts8tgpz0o5o/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmeh250ii0o6srts8j6939u0n_cmeh53idu0objrts8tgpz0o5o', weight_name='lora.safetensors')
image = pipeline('RUBIO0808').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmeh250ii0o6srts8j6939u0n_cmeh53idu0objrts8tgpz0o5o/discussions) to add images that show off what you’ve made with this LoRA.
|
JWHaHa/Qwen2.5-7B-Instruct-SCGF-GGUF
|
JWHaHa
| 2025-08-18T14:06:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T14:06:23Z |
---
base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** JWHaHa
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
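No usage snippet is provided. If the repository contains GGUF weights, as the name suggests, an invocation along these lines should work with llama.cpp (`<gguf-file>` is a placeholder for the actual filename in the Files tab):
```bash
llama-cli --hf-repo JWHaHa/Qwen2.5-7B-Instruct-SCGF-GGUF --hf-file <gguf-file> -p "Hello"
```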
|
inclusionAI/GroveMoE-Inst
|
inclusionAI
| 2025-08-18T14:06:10Z | 0 | 7 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"conversational",
"custom_code",
"arxiv:2508.07785",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T05:28:51Z |
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# GroveMoE-Inst
## Highlights
We introduce **GroveMoE**, a new sparse architecture using **adjugate experts** for dynamic computation allocation, featuring the following key highlights:
- **Architecture**: Novel **adjugate experts** grouped with ordinary experts; shared computation is executed once, then reused, cutting FLOPs.
- **Sparse Activation**: 33 B params total, only **3.14–3.28 B** active per token.
- **Training**: Mid-training + SFT, up-cycled from Qwen3-30B-A3B-Base; preserves prior knowledge while adding new capabilities.
## Model Downloads
| **Model** | **#Total Params** | **#Activated Params** | **Download** |
|:---------:|:-----------------:|:---------------------:|:------------:|
| GroveMoE-Base | 33B | 3.14~3.28B | [🤗 HuggingFace](https://huggingface.co/inclusionAI/GroveMoE-Base) |
| GroveMoE-Inst | 33B | 3.14~3.28B | [🤗 HuggingFace](https://huggingface.co/inclusionAI/GroveMoE-Inst) |
## Performance
| Model | Activated Params | MMLU-Pro | SuperGPQA | GPQA-Diamond | OlympiadBench | Omni-math | AIME'25 | MultiPL-E | LiveCodeBench v6 |
|:-----:|:----------------:|:------------:|:-------------:|:------------:|:-----------------:|:------------:|:------------------:|:------------------:|:------------------:|
|Llama4-Scout| 17B | 64.9 | 42.0 | 55.6 | 56.6 | 30.2 | 10.0 | 45.0 | 32.0 |
|Qwen3-30B-A3B| 3B | 63.3 | 40.5 | 51.7 | 60.3 | 33.7 | 21.7 | 66.0 | 29.4 |
|Qwen3-32B| 32B | 68.2 | 43.0 | 53.6 | 59.5 | 31.8 | 22.9 | 68.6 | 28.6 |
|Gemma3-27B-IT| 27B | 67.1 | 35.6 | 45.3 | 59.9 | 33.3 | 23.1 | 65.5 | 30.9 |
|Mistral-Small-3.2| 24B | 68.1 | 37.5 | 59.9 | 61.9 | 33.4 | 28.1 | 69.5 | 32.2 |
|GroveMoE-Inst|3.14~3.28B | <font color=#FBD98D>**72.8**</font> | <font color=#FBD98D>**47.7**</font> | <font color=#FBD98D>**61.3**</font> |<font color=#FBD98D>**71.2**</font> |<font color=#FBD98D>**43.5**</font> | <font color=#FBD98D>**44.4**</font> |<font color=#FBD98D>**74.5**</font> | <font color=#FBD98D>**34.6**</font> |
Top-1 scores across all models are shown in bold. More details are reported in our [technical report](https://arxiv.org/abs/2508.07785).
## Usage
Below are some code snippets to help you quickly get started with running the model. First, install the Transformers library.
```sh
$ pip install transformers==4.51.3
```
Then, copy the snippet from the section that is relevant for your use case.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "inclusionAI/GroveMoE-Inst"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
content = tokenizer.decode(output_ids, skip_special_tokens=True)
print("content:", content)
```
## Citation
```bibtex
@article{GroveMoE,
title = {GroveMoE: Towards Efficient and Superior MoE LLMs with Adjugate Experts},
author = {Wu, Haoyuan and Chen, Haoxing and Chen, Xiaodong and Zhou, Zhanchao and Chen, Tieyuan and Zhuang, Yihong and Lu, Guoshan and Zhao, Junbo and Liu, Lin and Huang, Zenan and Lan, Zhenzhong and Yu, Bei and Li, Jianguo},
journal = {arXiv preprint arXiv:2508.07785},
year = {2025}
}
```
|
moscowx21/blockassist-bc-extinct_bipedal_clam_1755525846
|
moscowx21
| 2025-08-18T14:04:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"extinct bipedal clam",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T14:04:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- extinct bipedal clam
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zehuajun/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-Q4_K_M-GGUF
|
zehuajun
| 2025-08-18T14:01:37Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"abliterated",
"uncensored",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated",
"base_model:quantized:huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T14:00:22Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507/blob/main/LICENSE
language:
- en
base_model: huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated
pipeline_tag: text-generation
library_name: transformers
tags:
- abliterated
- uncensored
- llama-cpp
- gguf-my-repo
---
# zehuajun/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated`](https://huggingface.co/huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo zehuajun/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-Q4_K_M-GGUF --hf-file huihui-qwen3-30b-a3b-thinking-2507-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo zehuajun/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-Q4_K_M-GGUF --hf-file huihui-qwen3-30b-a3b-thinking-2507-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo zehuajun/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-Q4_K_M-GGUF --hf-file huihui-qwen3-30b-a3b-thinking-2507-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo zehuajun/Huihui-Qwen3-30B-A3B-Thinking-2507-abliterated-Q4_K_M-GGUF --hf-file huihui-qwen3-30b-a3b-thinking-2507-abliterated-q4_k_m.gguf -c 2048
```
|
BinBashir/roberta_on_jumia_dataset
|
BinBashir
| 2025-08-18T13:57:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-18T13:57:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chainway9/blockassist-bc-untamed_quick_eel_1755523556
|
chainway9
| 2025-08-18T13:54:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:54:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ghofrani/xls-r-1b-fa-cv8
|
ghofrani
| 2025-08-18T13:54:37Z | 56 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"fa",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- fa
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: common8
results: []
license: apache-2.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# common8
This model is a fine-tuned version of [wghts/checkpoint-20000](https://huggingface.co/wghts/checkpoint-20000) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3174
- Wer: 0.3022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 250.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 3.5847 | 1.93 | 500 | 3.5104 | 1.0 |
| 2.7858 | 3.86 | 1000 | 2.9601 | 1.0001 |
| 1.6827 | 5.79 | 1500 | 0.7853 | 0.7030 |
| 1.4656 | 7.72 | 2000 | 0.6076 | 0.6014 |
| 1.3693 | 9.65 | 2500 | 0.5114 | 0.5307 |
| 1.379 | 11.58 | 3000 | 0.4666 | 0.4940 |
| 1.2832 | 13.51 | 3500 | 0.4257 | 0.4593 |
| 1.1931 | 15.44 | 4000 | 0.4039 | 0.4427 |
| 1.2911 | 17.37 | 4500 | 0.3956 | 0.4295 |
| 1.1577 | 19.3 | 5000 | 0.3705 | 0.4114 |
| 1.1135 | 21.24 | 5500 | 0.3740 | 0.4010 |
| 1.19 | 23.17 | 6000 | 0.3611 | 0.3935 |
| 1.1008 | 25.1 | 6500 | 0.3503 | 0.3880 |
| 1.0805 | 27.03 | 7000 | 0.3427 | 0.3781 |
| 1.1556 | 28.96 | 7500 | 0.3442 | 0.3727 |
| 1.0596 | 30.89 | 8000 | 0.3398 | 0.3646 |
| 1.0219 | 32.82 | 8500 | 0.3312 | 0.3660 |
| 1.1042 | 34.75 | 9000 | 0.3287 | 0.3612 |
| 1.0273 | 36.68 | 9500 | 0.3236 | 0.3556 |
| 1.0383 | 38.61 | 10000 | 0.3217 | 0.3558 |
| 1.0498 | 40.54 | 10500 | 0.3205 | 0.3520 |
| 0.9969 | 42.47 | 11000 | 0.3125 | 0.3504 |
| 1.0658 | 44.4 | 11500 | 0.3120 | 0.3493 |
| 0.992 | 46.33 | 12000 | 0.3137 | 0.3476 |
| 0.9737 | 48.26 | 12500 | 0.3085 | 0.3413 |
| 1.0817 | 50.19 | 13000 | 0.3091 | 0.3418 |
| 0.9414 | 52.12 | 13500 | 0.3072 | 0.3344 |
| 0.9295 | 54.05 | 14000 | 0.3039 | 0.3322 |
| 1.0248 | 55.98 | 14500 | 0.2991 | 0.3325 |
| 0.9474 | 57.91 | 15000 | 0.3032 | 0.3348 |
| 0.928 | 59.85 | 15500 | 0.2999 | 0.3285 |
| 1.0321 | 61.78 | 16000 | 0.2982 | 0.3253 |
| 0.9255 | 63.71 | 16500 | 0.2970 | 0.3231 |
| 0.8928 | 65.64 | 17000 | 0.2993 | 0.3250 |
| 1.008 | 67.57 | 17500 | 0.2985 | 0.3222 |
| 0.9371 | 69.5 | 18000 | 0.2968 | 0.3216 |
| 0.9077 | 71.43 | 18500 | 0.3011 | 0.3299 |
| 1.0044 | 73.36 | 19000 | 0.3053 | 0.3306 |
| 0.9625 | 75.29 | 19500 | 0.3159 | 0.3295 |
| 0.9816 | 77.22 | 20000 | 0.3080 | 0.3304 |
| 0.9587 | 119.19 | 20500 | 0.3088 | 0.3284 |
| 0.9178 | 122.09 | 21000 | 0.3132 | 0.3320 |
| 1.0282 | 125.0 | 21500 | 0.3099 | 0.3266 |
| 0.9337 | 127.9 | 22000 | 0.3110 | 0.3317 |
| 0.8822 | 130.81 | 22500 | 0.3037 | 0.3247 |
| 0.9644 | 133.72 | 23000 | 0.3037 | 0.3238 |
| 0.9214 | 136.62 | 23500 | 0.3040 | 0.3234 |
| 0.9167 | 139.53 | 24000 | 0.3079 | 0.3203 |
| 0.9047 | 142.44 | 24500 | 0.3018 | 0.3177 |
| 0.8909 | 145.35 | 25000 | 0.3053 | 0.3181 |
| 0.9646 | 148.25 | 25500 | 0.3095 | 0.3229 |
| 0.8802 | 151.16 | 26000 | 0.3111 | 0.3192 |
| 0.8411 | 154.07 | 26500 | 0.3068 | 0.3123 |
| 0.9235 | 156.97 | 27000 | 0.3090 | 0.3177 |
| 0.8943 | 159.88 | 27500 | 0.3115 | 0.3179 |
| 0.8854 | 162.79 | 28000 | 0.3052 | 0.3157 |
| 0.8734 | 165.69 | 28500 | 0.3077 | 0.3124 |
| 0.8515 | 168.6 | 29000 | 0.3117 | 0.3128 |
| 0.912 | 171.51 | 29500 | 0.3039 | 0.3121 |
| 0.8669 | 174.42 | 30000 | 0.3120 | 0.3123 |
| 0.823 | 177.32 | 30500 | 0.3148 | 0.3118 |
| 0.9129 | 180.23 | 31000 | 0.3179 | 0.3101 |
| 0.8255 | 183.14 | 31500 | 0.3164 | 0.3114 |
| 0.8948 | 186.05 | 32000 | 0.3128 | 0.3101 |
| 0.8397 | 188.95 | 32500 | 0.3143 | 0.3068 |
| 0.8341 | 191.86 | 33000 | 0.3127 | 0.3136 |
| 0.873 | 194.76 | 33500 | 0.3149 | 0.3124 |
| 0.8232 | 197.67 | 34000 | 0.3166 | 0.3086 |
| 0.8002 | 200.58 | 34500 | 0.3149 | 0.3061 |
| 0.8621 | 203.49 | 35000 | 0.3160 | 0.3093 |
| 0.8123 | 206.39 | 35500 | 0.3141 | 0.3063 |
| 0.7995 | 209.3 | 36000 | 0.3174 | 0.3075 |
| 0.8271 | 212.21 | 36500 | 0.3173 | 0.3043 |
| 0.8059 | 215.12 | 37000 | 0.3176 | 0.3079 |
| 0.8835 | 218.02 | 37500 | 0.3169 | 0.3062 |
| 0.8027 | 220.93 | 38000 | 0.3203 | 0.3098 |
| 0.775 | 223.83 | 38500 | 0.3159 | 0.3068 |
| 0.8487 | 226.74 | 39000 | 0.3161 | 0.3072 |
| 0.7929 | 229.65 | 39500 | 0.3143 | 0.3037 |
| 0.7653 | 232.56 | 40000 | 0.3160 | 0.3048 |
| 0.8211 | 235.46 | 40500 | 0.3173 | 0.3031 |
| 0.7761 | 238.37 | 41000 | 0.3176 | 0.3025 |
| 0.7761 | 241.28 | 41500 | 0.3179 | 0.3027 |
| 0.7903 | 244.19 | 42000 | 0.3181 | 0.3016 |
| 0.7807 | 247.09 | 42500 | 0.3170 | 0.3027 |
| 0.8406 | 250.0 | 43000 | 0.3174 | 0.3022 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.3.dev0
- Tokenizers 0.10.3
|
mgazz/Prithvi-EO-2.0-300M-BurnScars
|
mgazz
| 2025-08-18T13:51:16Z | 10 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-14T14:05:43Z |
---
license: apache-2.0
---
|
mookiezi/Discord-Micae-Hermes-3-3B
|
mookiezi
| 2025-08-18T13:35:59Z | 1,764 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"causal-lm",
"instruct",
"chat",
"fine-tuned",
"merged-lora",
"llama-3",
"hermes",
"discord-dataset",
"conversational-ai",
"chatml",
"pytorch",
"open-weights",
"3b-parameters",
"conversational",
"dataset:mookiezi/Discord-OpenMicae",
"arxiv:2408.11857",
"base_model:NousResearch/Hermes-3-Llama-3.2-3B",
"base_model:finetune:NousResearch/Hermes-3-Llama-3.2-3B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-02T23:21:01Z |
---
tags:
- transformers
- causal-lm
- text-generation
- instruct
- chat
- fine-tuned
- merged-lora
- llama-3
- hermes
- discord-dataset
- conversational-ai
- chatml
- pytorch
- open-weights
- 3b-parameters
model-index:
- name: Discord-Micae-Hermes-3-3B
results: []
base_model:
- NousResearch/Hermes-3-Llama-3.2-3B
datasets:
- mookiezi/Discord-OpenMicae
library_name: transformers
license: llama3
---
<div style="display: flex; align-items: center; gap: 8px;">
<span>Run this model on Google Colab for free:</span>
<a href="https://colab.research.google.com/drive/1kUtTeey5THhKW6f0BDKB9MFe4JIEew_Z?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/>
</a>
</div>
## Discord-Micae-Hermes-3-3B

## Model Description
Discord-Micae-Hermes-3-3B is a new finetune of [NousResearch/Hermes-3-Llama-3.2-3B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B).
This model serves as a foundation for ongoing exploration into the capabilities of human-adjacent text generation.
- **Training Schedule:**
- 17M tokens of 260 thousand single-turn exchanges (STX) – 6 epochs @ 2e-5
- 5.5M tokens of 101 thousand multi-turn chains – 6 epochs @ 2e-5
- Combined dataset – 1 epoch @ 1e-5
- **Fine-Tuning Method:** LoRA merge (α = 32, r = 8, dropout = 0.1)
- **LoRA target modules:** q_proj, v_proj
- **Scheduler:** Cosine schedule with 220 warmup steps per phase
- **Optimizer:** torch.optim.AdamW (betas = (0.9, 0.999), amsgrad = False, eps = 1e-8, weight decay = 0.01)
- **Batching:** Effective size of 126 (7 batch size × 18 gradient accumulation steps)
- **Timeline and Hardware:** Training took place over 17 days on a GTX 1080 (8GB).
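The LoRA settings above map, approximately, onto the following `peft` configuration. This is an illustrative sketch only; the author's actual training script is not published here.
```python
# Hedged sketch of the LoRA setup described above (illustrative, not the
# author's actual training code).
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                  # LoRA rank
    lora_alpha=32,                        # scaling factor alpha
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # attention projections targeted
    task_type="CAUSAL_LM",
)
```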
## Dataset
The model was fine-tuned on the [mookiezi/Discord-OpenMicae](https://huggingface.co/datasets/mookiezi/Discord-OpenMicae) dataset.
## Intended Use
- Conversational AI research
- Experimentation with dialogue agents trained on Discord data
- Chatbots requiring **casual, human-like tone**
### Limitations
- The model inherits potential biases from Discord-style language.
- It is **not safety-aligned** for deployment without moderation.
- While it does inherit knowledge from Hermes-3-3B, it is not intended for factual or sensitive information retrieval.
## Prompting
Micae uses the same ChatML prompt format as Hermes 3 and handles context and chat history.
```
<|im_start|>user
what do you think about ai?<|im_end|>
<|im_start|>assistant
i'm not a fan of AI but I can understand why people are excited to use it. it's like the first time they got an electric car, or when they were able to fly in space, that excitement is real<|im_end|>
<|im_start|>user
where do you think it's heading?<|im_end|>
<|im_start|>assistant
it will be used for good and bad things just as with other techs<|im_end|>
<|im_start|>user
what do you do use it for personally?<|im_end|>
<|im_start|>assistant
nothing much yet, I hope to find some stuff later on.<|im_end|>
<|im_start|>user
do you like pokemon?<|im_end|>
<|im_start|>assistant
yes I love Pokémon.<|im_end|>
<|im_start|>user
what's your favorite pokemon<|im_end|>
<|im_start|>assistant
garchomp<|im_end|>
```
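Because the model uses the standard ChatML template bundled with its tokenizer, the prompt above can also be built programmatically. A minimal sketch with 🤗 Transformers (parameter choices here are illustrative; it assumes enough memory for a 3B model):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mookiezi/Discord-Micae-Hermes-3-3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Build the ChatML prompt from a message list instead of writing it by hand.
messages = [{"role": "user", "content": "what do you think about ai?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```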
---
## Interfacing
A Python script for interfacing with the model, optimized for casual, Discord-style conversation, is available at [https://github.com/mookiezi/interface](https://github.com/mookiezi/interface).
To use this model run:
```
python interface.py -m mookiezi/Discord-Micae-Hermes-3-3B
```

---
## License
See the [Meta LLaMA 3 Community License](https://ai.meta.com/llama/license/) for details.
## How to cite:
If you use this model in your work, please cite both **Discord-Micae-Hermes-3-3B** and the base model **Hermes 3**:
```bibtex
@misc{discord-micae-hermes3b,
title = {Discord-Micae-Hermes-3-3B},
author = {mookiezi},
year = {2025},
url={https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-3B}
}
@misc{teknium2024hermes3technicalreport,
title={Hermes 3 Technical Report},
author={Ryan Teknium and Jeffrey Quesnelle and Chen Guang},
year={2024},
eprint={2408.11857},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2408.11857}
}
```
[](https://20000.online/micae)
[](https://20000.online/openmicae)
[](https://20000.online/discord-dialogues)
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755522529
|
thanobidex
| 2025-08-18T13:34:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:34:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755522494
|
pempekmangedd
| 2025-08-18T13:33:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T13:33:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/InnoSpark-VPC-RM-32B-GGUF
|
mradermacher
| 2025-08-18T13:30:02Z | 156 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:sii-research/InnoSpark-HPC-RM-32B",
"base_model:quantized:sii-research/InnoSpark-HPC-RM-32B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-21T12:05:02Z |
---
base_model: sii-research/InnoSpark-HPC-RM-32B
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/sii-research/InnoSpark-HPC-RM-32B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InnoSpark-VPC-RM-32B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
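For example, a single-file quant from the table below can be run directly with llama.cpp. A hedged sketch (it assumes a llama.cpp build with `--hf-repo` support):
```bash
llama-cli --hf-repo mradermacher/InnoSpark-VPC-RM-32B-GGUF \
  --hf-file InnoSpark-VPC-RM-32B.Q4_K_M.gguf \
  -p "Hello"
```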
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-VPC-RM-32B-GGUF/resolve/main/InnoSpark-VPC-RM-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/InnoSpark-R-72B-0701-GGUF
|
mradermacher
| 2025-08-18T13:28:26Z | 129 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:sii-research/InnoSpark-R-72B-0701",
"base_model:quantized:sii-research/InnoSpark-R-72B-0701",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-07-22T09:53:04Z |
---
base_model: sii-research/InnoSpark-R-72B-0701
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/sii-research/InnoSpark-R-72B-0701
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#InnoSpark-R-72B-0701-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
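For the multi-part quants below (Q5_K_S and larger), the parts can typically be reassembled with `cat`. A hedged sketch, assuming the simple byte-level splits described in the READMEs linked above:
```bash
cat InnoSpark-R-72B-0701.Q5_K_S.gguf.part1of2 \
    InnoSpark-R-72B-0701.Q5_K_S.gguf.part2of2 \
    > InnoSpark-R-72B-0701.Q5_K_S.gguf
```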
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q2_K.gguf) | Q2_K | 29.9 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q3_K_S.gguf) | Q3_K_S | 34.6 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q3_K_M.gguf) | Q3_K_M | 37.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q3_K_L.gguf) | Q3_K_L | 39.6 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.IQ4_XS.gguf) | IQ4_XS | 40.3 | |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q4_K_S.gguf) | Q4_K_S | 44.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q4_K_M.gguf) | Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q5_K_S.gguf.part2of2) | Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q5_K_M.gguf.part2of2) | Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q6_K.gguf.part2of2) | Q6_K | 64.4 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/InnoSpark-R-72B-0701-GGUF/resolve/main/InnoSpark-R-72B-0701.Q8_0.gguf.part2of2) | Q8_0 | 77.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ahhava/elyon
|
ahhava
| 2025-08-18T13:27:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-18T13:01:49Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: elyon
---
# Elyon
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `elyon` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "elyon",
"lora_weights": "https://huggingface.co/ahhava/elyon/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ahhava/elyon', weight_name='lora.safetensors')
image = pipeline('elyon').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 30
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ahhava/elyon/discussions) to add images that show off what you’ve made with this LoRA.
|
ciorant/checkpoints
|
ciorant
| 2025-08-18T13:23:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T13:22:45Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.55.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
zekaemo/Indobert-Sentiment-Analysis-with-Bayes-Optimization-and-Weighted-Training
|
zekaemo
| 2025-08-18T13:22:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indobenchmark/indobert-base-p2",
"base_model:finetune:indobenchmark/indobert-base-p2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-18T13:12:14Z |
---
library_name: transformers
license: mit
base_model: indobenchmark/indobert-base-p2
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: Indobert-Sentiment-Analysis-with-Bayes-Optimization-and-Weighted-Training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Indobert-Sentiment-Analysis-with-Bayes-Optimization-and-Weighted-Training
This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8187
- Accuracy: 0.8105263157894737
- F1: 0.8086037151702786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6196 | 1.0 | 51 | 0.4970 | 0.7719 | 0.7780 |
| 0.4067 | 2.0 | 102 | 0.5441 | 0.7404 | 0.7495 |
| 0.2248 | 3.0 | 153 | 0.7342 | 0.7684 | 0.7669 |
| 0.1776 | 4.0 | 204 | 0.6930 | 0.8 | 0.8003 |
| 0.1137 | 5.0 | 255 | 1.1582 | 0.7789 | 0.7707 |
| 0.0868 | 6.0 | 306 | 1.1574 | 0.8 | 0.7983 |
| 0.0609 | 7.0 | 357 | 1.3369 | 0.7930 | 0.7871 |
| 0.0354 | 8.0 | 408 | 1.2317 | 0.8105 | 0.8086 |
| 0.0188 | 9.0 | 459 | 1.7317 | 0.8 | 0.7859 |
| 0.0127 | 10.0 | 510 | 1.6185 | 0.8035 | 0.8000 |
| 0.0155 | 11.0 | 561 | 1.7635 | 0.7965 | 0.7903 |
| 0.0106 | 12.0 | 612 | 1.8325 | 0.7965 | 0.7884 |
| 0.0106 | 13.0 | 663 | 1.8020 | 0.7930 | 0.7871 |
| 0.0101 | 14.0 | 714 | 1.8116 | 0.7930 | 0.7871 |
| 0.0105 | 15.0 | 765 | 1.8187 | 0.7930 | 0.7871 |
### Framework versions
- Transformers 4.55.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
halley-ai/gpt-oss-20b-MLX-6bit-gs32
|
halley-ai
| 2025-08-18T13:19:24Z | 0 | 1 |
mlx
|
[
"mlx",
"safetensors",
"gpt_oss",
"apple-silicon",
"metal",
"arm64",
"6-bit",
"group-size-32",
"moe",
"mpx4",
"openai",
"halley-ai",
"text-generation",
"conversational",
"en",
"ro",
"base_model:openai/gpt-oss-20b",
"base_model:quantized:openai/gpt-oss-20b",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-16T20:14:21Z |
---
library_name: mlx
pipeline_tag: text-generation
inference: false # MLX is macOS-only; HF Inference API won't run it
license: apache-2.0
base_model: openai/gpt-oss-20b
base_model_relation: quantized
language:
- en
- ro
tags:
- apple-silicon
- metal
- arm64
- 6-bit
- group-size-32
- moe
- mpx4
- openai
- halley-ai
---
# gpt-oss-20b — MLX 6-bit (group size 32)
**Summary.** This is a 6-bit (**Q6**) **MLX** quantization of **gpt-oss-20B** (sparse Mixture-of-Experts, MPx4). Group size is **32**.
Built for **Apple Silicon** with Metal acceleration.
- **Base model:** `openai/gpt-oss-20b` (Apache-2.0)
- **Quantization:** MLX Q6, `q_group_size=32` (some tensors remain FP16 for stability)
- **Files:** MLX weight shards + `config.json`; tokenizer files included for drop-in use
- **Footprint:** ~**18.38 GB** on disk
- **Intended use:** local inference / research on M-series Macs
- **Not intended for:** safety-critical decisions; outputs may be inaccurate or biased
## Requirements
**Runs on:** Apple Silicon (M1 or newer) with **macOS ≥ 13.5** via **MLX (Metal)**.
**Not supported:** Intel macOS / Linux / Windows (use a GGUF build + llama.cpp instead).
**RAM guidance:** 32 GB minimum for Q6 (gs=32); a 24 GB MacBook Pro **won’t run it**. Extra RAM improves headroom.
## How to use (MLX)
```bash
pip install mlx-lm transformers
```
```python
# Python API (uses tokenizer bundled with this repo)
from mlx_lm import load, generate
model, tokenizer = load("halley-ai/gpt-oss-20b-MLX-6bit-gs32")
print(generate(
model, tokenizer,
prompt="Explain the Chudnovsky algorithm to compute π.",
max_tokens=256, max_kv_size=512
))
```
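There is also a command-line entry point if you prefer not to write Python; a hedged one-liner (flag names assume a recent `mlx-lm` release):
```bash
python -m mlx_lm.generate --model halley-ai/gpt-oss-20b-MLX-6bit-gs32 \
  --prompt "Explain the Chudnovsky algorithm to compute π." --max-tokens 256
```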
## Performance (Apple Silicon, real-world)
LM Studio / CLI (MLX, Q6 gs=32): ~49–55 tok/s, TTFB ~0.35–0.45 s for ≈2k-token responses, measured on an M1 Max 32 GB (short fixed-length runs show lower tok/s due to startup overhead).
Throughput varies with Mac model, context, and sampler settings.
## Evaluation
Perplexity (PPL) streaming evaluation on WikiText-2; window=stride=4096, ~100k tokens, EOS inserted between docs.
<table>
<thead>
<tr><th>Variant</th><th>PPL (ctx=4096)</th></tr>
</thead>
<tbody>
<tr><td>MLX 8-bit (reference)</td><td>10.75</td></tr>
<tr><td><strong>MLX 6-bit (gs=32)</strong></td><td><strong>10.46 (−2.7% vs 8-bit/gs64)</strong></td></tr>
<tr><td>MLX 5-bit (gs=32)</td><td>11.11 (+3.3% vs 8-bit/gs64, +6.2% vs 6-bit/gs32)</td></tr>
<tr><td>MLX 4-bit (gs=32)</td><td>13.70 (+27.4% vs 8-bit/gs64, +31.0% vs 6-bit/gs32)</td></tr>
</tbody>
</table>
**Interpretation**
- MLX 6-bit/gs32: Best of the group; edges out 8-bit/gs64 slightly at a smaller
footprint.
- MLX 5-bit/gs32: Small, consistent drop vs 6-bit/gs32 and 8-bit/gs64 (~3–6% PPL); strong “fits-16GB” option when GPU buffer limits matter.
- MLX 8-bit/gs64: Solid reference; near‑FP16 quality at a larger footprint.
- MLX 4-bit/gs32: Trades accuracy for footprint; use when RAM is constrained or throughput is the priority.
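The streaming protocol above amounts to scoring fixed, non-overlapping windows and exponentiating the mean per-token negative log-likelihood. A minimal sketch (illustrative only; the exact harness behind these numbers may differ):
```python
import math

def streaming_ppl(score_window, token_ids, window=4096):
    """Perplexity over non-overlapping windows (window == stride).

    token_ids: full corpus token stream, with EOS inserted between documents.
    score_window(ids) -> total negative log-likelihood of ids, in nats.
    """
    total_nll, total_tokens = 0.0, 0
    for i in range(0, len(token_ids), window):
        chunk = token_ids[i:i + window]
        total_nll += score_window(chunk)
        total_tokens += len(chunk)
    return math.exp(total_nll / total_tokens)
```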
## Conversion details (provenance)
```bash
python -m mlx_lm convert \
--hf-path openai/gpt-oss-20b \
--mlx-path gpt-oss-20b-mlx-q6-gs32 \
--q-bits 6 --q-group-size 32 -q
```
- Some non-expert tensors (embeddings, norms, router) remain FP16.
## Sibling & reference models
- halley-ai/gpt-oss-20b-MLX-5bit-gs32
- halley-ai/gpt-oss-20b-MLX-4bit-gs32
- Reference (8-bit, upstream): lmstudio-community/gpt-oss-20b-MLX-8bit
## Limitations & biases
Outputs may be factually wrong or unsafe. Don’t use for medical, legal, or financial decisions without human review.
MoE models can be sensitive to prompt wording; prefer explicit instructions and structure.
## License & credits
- License: Apache-2.0 (inherits from base model)
- Base model: OpenAI gpt-oss-20B
- Quantization: Halley AI Lab (MLX Q6, gs=32)
- Please cite both the base model and this repository when you use the weights.
|
isbondarev/gemma_test
|
isbondarev
| 2025-08-18T13:06:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T13:05:07Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Q3-IIJAN-3B-GGUF
|
mradermacher
| 2025-08-18T13:00:38Z | 1,156 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:djuna-test-lab/Q3-IIJAN-4B",
"base_model:quantized:djuna-test-lab/Q3-IIJAN-4B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T18:02:47Z |
---
base_model: djuna-test-lab/Q3-IIJAN-4B
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/djuna-test-lab/Q3-IIJAN-4B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Q3-IIJAN-3B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
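You can also load a quant programmatically; a hedged sketch with `llama-cpp-python` (assuming it is installed, using the Q4_K_M file from the table below):
```python
from llama_cpp import Llama

# Downloads the chosen quant from the Hub and loads it locally.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Q3-IIJAN-3B-GGUF",
    filename="Q3-IIJAN-3B.Q4_K_M.gguf",
)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```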
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Q3-IIJAN-3B-GGUF/resolve/main/Q3-IIJAN-3B.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-IIJAN-3B-GGUF/resolve/main/Q3-IIJAN-3B.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-IIJAN-3B-GGUF/resolve/main/Q3-IIJAN-3B.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Q3-IIJAN-3B-GGUF/resolve/main/Q3-IIJAN-3B.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-IIJAN-3B-GGUF/resolve/main/Q3-IIJAN-3B.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-IIJAN-3B-GGUF/resolve/main/Q3-IIJAN-3B.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Q3-IIJAN-3B-GGUF/resolve/main/Q3-IIJAN-3B.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Q3-IIJAN-3B-GGUF/resolve/main/Q3-IIJAN-3B.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-IIJAN-3B-GGUF/resolve/main/Q3-IIJAN-3B.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Q3-IIJAN-3B-GGUF/resolve/main/Q3-IIJAN-3B.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Q3-IIJAN-3B-GGUF/resolve/main/Q3-IIJAN-3B.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Q3-IIJAN-3B-GGUF/resolve/main/Q3-IIJAN-3B.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
WasamiKirua/LFM2-1.2B-ProjectHuman-GGUF
|
WasamiKirua
| 2025-08-18T12:55:41Z | 0 | 0 | null |
[
"gguf",
"text-generation-inference",
"LFM2",
"unsloth",
"companionship",
"eq",
"samantha",
"her",
"en",
"dataset:WasamiKirua/Her-Samantha-Style",
"base_model:WasamiKirua/LFM2-1.2B-ProjectHuman",
"base_model:quantized:WasamiKirua/LFM2-1.2B-ProjectHuman",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-16T13:06:53Z |
---
license: apache-2.0
datasets:
- WasamiKirua/Her-Samantha-Style
language:
- en
base_model:
- WasamiKirua/LFM2-1.2B-ProjectHuman
tags:
- text-generation-inference
- LFM2
- unsloth
- companionship
- eq
- samantha
- her
---
|
WasamiKirua/gemma3-270M-ProjectHuman-GGUF
|
WasamiKirua
| 2025-08-18T12:52:23Z | 0 | 0 | null |
[
"gguf",
"gemma3-text",
"text-generation-inference",
"unsloth",
"samantha",
"her",
"companionship",
"text-generation",
"en",
"dataset:WasamiKirua/Her-Samantha-Style",
"base_model:WasamiKirua/gemma3-270M-ProjectHuman",
"base_model:quantized:WasamiKirua/gemma3-270M-ProjectHuman",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-16T10:09:36Z |
---
license: apache-2.0
datasets:
- WasamiKirua/Her-Samantha-Style
language:
- en
base_model:
- WasamiKirua/gemma3-270M-ProjectHuman
pipeline_tag: text-generation
tags:
- gemma3-text
- text-generation-inference
- unsloth
- samantha
- her
- companionship
---
|
koloni/blockassist-bc-deadly_graceful_stingray_1755519887
|
koloni
| 2025-08-18T12:52:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-18T12:52:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|