modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-13 18:26:42) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 558 classes) | tags (list, length 1–4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-13 18:25:20) | card (string, length 11–1.01M)
---|---|---|---|---|---|---|---|---|---
kapalbalap/blockassist-bc-peaceful_wary_owl_1755687642 | kapalbalap | 2025-08-20T11:01:46Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T11:01:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755685708 | milliarderdol | 2025-08-20T11:01:30Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring rough scorpion", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T11:01:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755687569 | Ferdi3425 | 2025-08-20T11:00:45Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T11:00:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lc700x/dpt-dinov2-base-kitti | lc700x | 2025-08-20T10:58:13Z | 0 | 0 | null | ["safetensors", "dpt", "vision", "depth-estimation", "dinov2", "arxiv:2304.07193", "arxiv:2103.13413", "license:apache-2.0", "region:us"] | depth-estimation | 2025-08-20T03:01:42Z |
---
license: apache-2.0
tags:
- vision
- depth-estimation
- dinov2
inference: false
---
# Model Card: DPT model with DINOv2 backbone
## Model Details
DPT (Dense Prediction Transformer) model with a DINOv2 backbone, as proposed in [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al. The DPT head itself was introduced in [Vision Transformers for Dense Prediction](https://arxiv.org/abs/2103.13413) by Ranftl et al.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dpt_architecture.jpg"
alt="drawing" width="600"/>
<small> DPT architecture. Taken from the <a href="https://arxiv.org/abs/2103.13413" target="_blank">original paper</a>. </small>
### Resources
- [DINOv2 Paper](https://arxiv.org/abs/2304.07193)
- [DPT Paper](https://arxiv.org/abs/2103.13413)
### Use with Transformers
```python
from transformers import AutoImageProcessor, DPTForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/dpt-dinov2-base-kitti")
model = DPTForDepthEstimation.from_pretrained("facebook/dpt-dinov2-base-kitti")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
    predicted_depth = outputs.predicted_depth
# interpolate to original size
prediction = torch.nn.functional.interpolate(
    predicted_depth.unsqueeze(1),
    size=image.size[::-1],
    mode="bicubic",
    align_corners=False,
)
# visualize the prediction
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / np.max(output)).astype("uint8")
depth = Image.fromarray(formatted)
```
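A minimal alternative, assuming the same checkpoint name as in the snippet above, is the high-level `depth-estimation` pipeline, which handles pre- and post-processing internally:
```python
from transformers import pipeline
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# The pipeline wraps the image processor and model used above and returns a PIL depth map.
depth_estimator = pipeline("depth-estimation", model="facebook/dpt-dinov2-base-kitti")
result = depth_estimator(image)
result["depth"].save("depth.png")
```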
## Model Use
### Intended Use
The model is intended to showcase that using the DPT framework with DINOv2 as the backbone yields a powerful depth estimator.
### BibTeX entry and citation info
```bibtex
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski},
year={2023},
eprint={2304.07193},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
nema122/blockassist-bc-furry_rugged_camel_1755687417 | nema122 | 2025-08-20T10:58:05Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "furry rugged camel", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T10:57:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- furry rugged camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/81_vI28Fw | VoilaRaj | 2025-08-20T10:57:37Z | 0 | 0 | null | ["safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-08-20T10:53:51Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
vrushank27/Qwen2-0.5B-GRPO-test | vrushank27 | 2025-08-20T10:56:20Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "grpo", "trl", "dataset:AI-MO/NuminaMath-TIR", "arxiv:2402.03300", "base_model:Qwen/Qwen2-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2-0.5B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-08-17T10:42:29Z |
---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vrushank27/Qwen2-0.5B-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
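The exact training script for this run is not included in the card; the sketch below only illustrates how a comparable GRPO run is typically set up with TRL's `GRPOTrainer`. The `reward_len` function, the prompt mapping, and the hyperparameters are illustrative assumptions, not the settings used for this checkpoint.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# NuminaMath-TIR provides "problem"/"solution" columns; GRPOTrainer expects a "prompt" column.
dataset = load_dataset("AI-MO/NuminaMath-TIR", split="train")
dataset = dataset.map(lambda x: {"prompt": x["problem"]})

# Toy reward (prefers completions near 200 characters) standing in for a real math-verification reward.
def reward_len(completions, **kwargs):
    return [-abs(200 - len(c)) for c in completions]

training_args = GRPOConfig(output_dir="Qwen2-0.5B-GRPO-test", num_generations=8, max_completion_length=256)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```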
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755687296 | kapalbalap | 2025-08-20T10:55:59Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T10:55:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Connexus/mt5-multitask-v1 | Connexus | 2025-08-20T10:55:54Z | 0 | 0 | transformers | ["transformers", "safetensors", "mt5", "text2text-generation", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-20T10:51:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
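No usage example is provided by the author. Based only on the repository tags (`mt5`, `text2text-generation`), a generic loading sketch would look like the following; the expected prompt format and tasks are undocumented, so this is an assumption rather than the intended usage:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Connexus/mt5-multitask-v1")
model = AutoModelForSeq2SeqLM.from_pretrained("Connexus/mt5-multitask-v1")

# Placeholder input; the task prefixes (if any) this multitask model expects are not documented.
inputs = tokenizer("example input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```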
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755687134 | kapalbalap | 2025-08-20T10:53:03Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T10:52:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755687071 | Ferdi3425 | 2025-08-20T10:52:25Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T10:51:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ver-filtrado-video-abigail-lalama-snayder/VER.filtrado.video.de.abigail.lalama.y.snayder.influencer.se.hace.viral.en.redes.sociales | Ver-filtrado-video-abigail-lalama-snayder | 2025-08-20T10:50:30Z | 0 | 0 | null | ["region:us"] | null | 2025-08-20T10:50:19Z |
<a href="https://sdu.sk/AyL"><img src="https://files.qatarliving.com/event/2025/06/20/Jawan69_0-1749987397680.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝙨𝙞𝙜𝙣 𝙪𝙥 𝙖𝙣𝙙 𝙬𝙖𝙩𝙘𝙝 𝙛𝙪𝙡𝙡 𝙫𝙞𝙙𝙚𝙤 𝙃𝘿)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755686971 | kapalbalap | 2025-08-20T10:50:22Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T10:50:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aralper18/blockassist-bc-gilded_tangled_albatross_1755686891 | aralper18 | 2025-08-20T10:48:52Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gilded tangled albatross", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T10:48:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gilded tangled albatross
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ilkhom199/8790b2d7-a501-485e-a562-a07401e8f05a | ilkhom199 | 2025-08-20T10:47:31Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-20T10:46:36Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hdong0/deepseek-Llama-8B-Open-R1-GRPO_deepscaler_acc_mu_8_constant_lr_no_kl | hdong0 | 2025-08-20T10:47:08Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:agentica-org/DeepScaleR-Preview-Dataset", "arxiv:2402.03300", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-19T20:55:11Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
datasets: agentica-org/DeepScaleR-Preview-Dataset
library_name: transformers
model_name: deepseek-Llama-8B-Open-R1-GRPO_deepscaler_acc_mu_8_constant_lr_no_kl
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for deepseek-Llama-8B-Open-R1-GRPO_deepscaler_acc_mu_8_constant_lr_no_kl
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) on the [agentica-org/DeepScaleR-Preview-Dataset](https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hdong0/deepseek-Llama-8B-Open-R1-GRPO_deepscaler_acc_mu_8_constant_lr_no_kl", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755686740 | Ferdi3425 | 2025-08-20T10:46:53Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T10:46:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Uppal-Farm-Girl-Viral-Video-Original-Link/Official.Uppal.Farm.Girl.Viral.Video.Original.Link | Uppal-Farm-Girl-Viral-Video-Original-Link | 2025-08-20T10:43:18Z | 0 | 0 | null | ["region:us"] | null | 2025-08-20T10:42:40Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5xr5mb3e?leaked-videos/" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755684849 | hakimjustbao | 2025-08-20T10:42:18Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T10:42:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gauravvivek8/llama2finetune | gauravvivek8 | 2025-08-20T10:42:09Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-20T10:38:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755686467 | kapalbalap | 2025-08-20T10:42:09Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T10:41:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Neurazum/bai-6-Emotion | Neurazum | 2025-08-20T10:42:01Z | 0 | 1 | keras | ["keras", "eeg", "brain", "deeplearning", "artificialintelligence", "ai", "model", "emotions", "neuroscience", "neura", "neuro", "bci", "health", "time-series-forecasting", "en", "tr", "license:cc-by-nc-sa-4.0", "region:us"] | time-series-forecasting | 2025-08-16T19:13:49Z |
---
license: cc-by-nc-sa-4.0
language:
- en
- tr
tags:
- eeg
- brain
- deeplearning
- artificialintelligence
- ai
- model
- emotions
- neuroscience
- neura
- neuro
- bci
- health
pipeline_tag: time-series-forecasting
library_name: keras
---
# bai-6 Emotion (TR)
## Tanım
bai-6 Emotion modeli, EEG ve iEEG tarafından toplanan veriler ile eğitilen bir detaylı duygu sınıflandırma modelidir. Model, 6 kanallı bir EEG cihazıyla çalışabilir durumdadır.
## Hedef Kitle
bai modelleri, herkes için tasarlanmıştır. Açık kaynak versiyonları herkes tarafından kullanılabilir.
## Sınıflar
- Sakin
- Üzgün
- Kızgın
- Mutlu
## Neuramax
Neuramax-6 Gen1 ile tam uyumlu çalışmaktadır.
-------------------------------------------------------------------------
# bai-6 Emotion (EN)
## Definition
The bai-6 Emotion model is a detailed emotion classification model trained with data collected by EEG and iEEG. The model can work with a 6-channel EEG device.
## Target Audience
bai models are designed for everyone. Open source versions are available for everyone to use.
## Classes
- Calm
- Sad
- Angry
- Happy
## Neuramax
Fully compatible with Neuramax-6 Gen1.
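For a quick offline check, the v1 model can be loaded and queried directly. This is a minimal sketch assuming the placeholder file paths and the per-channel feature extraction used in the full examples below (mean, standard deviation, peak-to-peak, total absolute difference, median, and 95th percentile of the absolute signal per channel):
```python
import numpy as np
import joblib
from tensorflow.keras.models import load_model

model = load_model("model/path/bai-6 Emotion.h5")       # placeholder path, as in the examples below
scaler = joblib.load("scaler/path/bai-6_scaler.save")   # placeholder path, as in the examples below

def extract_features(eeg):
    """eeg: array of shape (6 channels, n_samples); returns 36 summary features."""
    feats = []
    for ch in eeg:
        feats.extend([np.mean(ch), np.std(ch), np.ptp(ch),
                      np.sum(np.abs(np.diff(ch))), np.median(ch),
                      np.percentile(np.abs(ch), 95)])
    return np.array(feats)

eeg_window = np.random.normal(0, 5e-6, (6, 1000))       # 1 s of synthetic 6-channel EEG at 1 kHz
probs = model.predict(scaler.transform([extract_features(eeg_window)]), verbose=0)[0]
labels = ["Happy", "Angry", "Sad", "Calm"]               # class order used in the examples below
print(dict(zip(labels, np.round(probs, 3))))
```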
-------------
<details>
<summary><strong>bai-6 Emotion v1</strong></summary>
# bai-6 Emotion v1 Yapısı / Structure
```bash
"model_summary":
"Model: Total params: 5,046 (19.71 KB)
Trainable params: 5,044 (19.70 KB)
Non-trainable params: 0 (0.00 B)
Optimizer params: 2 (12.00 B)",
"layers": [
{
"name": "dense",
"trainable": true,
"count_params": 2368
},
{
"name": "dropout",
"trainable": true,
"count_params": 0
},
{
"name": "dense_1",
"trainable": true,
"count_params": 2080
},
{
"name": "dropout_1",
"trainable": true,
"count_params": 0
},
{
"name": "dense_2",
"trainable": true,
"count_params": 528
},
{
"name": "dense_3",
"trainable": true,
"count_params": 68
}
]
```
# Kullanım / Usage
## 1. Sentetik Veri ile / With Synthetic Data
```python
import numpy as np
import matplotlib.pyplot as plt
import mne
from matplotlib.animation import FuncAnimation
from tensorflow.keras.models import load_model
import joblib
class EEGMonitor:
def __init__(self, model_path, scaler_path):
self.model = load_model(model_path)
self.scaler = joblib.load(scaler_path)
self.ch_names = ['T7', 'C3', 'Cz', 'C4', 'T8', 'Pz']
self.fs = 1000 # Örnekleme frekansı / Sampling frequency
self.buffer_size = 1000 # 1 saniyelik buffer / 1 second buffer
self.raw_buffer = np.zeros((6, self.buffer_size))
self.feature_contributions = {ch: [] for ch in self.ch_names}
# Elektrot pozisyonları (10-20 sistemi) / Electrode positions (10-20 system)
self.montage = mne.channels.make_standard_montage('standard_1020')
self.fig = plt.figure(figsize=(15, 10))
self.setup_plots()
def setup_plots(self):
self.ax1 = self.fig.add_subplot(223)
self.ax1.set_title("Canlı EEG Sinyalleri / Live EEG Signals")
self.ax1.set_xlabel("Zaman (ms) / Time (ms)")
self.ax1.set_ylabel("Amplitüd (µV) / Amplitude (µV)")
self.ax2 = self.fig.add_subplot(221)
self.ax2.set_title("Elektrot Konumları / Electrode Locations")
self.ax3 = self.fig.add_subplot(224)
self.ax3.set_title("Elektrot Katkı Oranları / Electrode Contribution Ratios")
self.ax3.set_ylim(0, 1)
self.ax4 = self.fig.add_subplot(222)
self.ax4.set_title("Duygu Tahmin Olasılıkları / Emotion Prediction Probabilities")
self.ax4.set_ylim(0, 1)
plt.tight_layout()
def generate_synthetic_data(self):
"""Sentetik EEG verisi üretir (6 kanal x 1000 örnek) / Generates synthetic EEG data (6 channels x 1000 samples)"""
noise = np.random.normal(0, 5e-6, (6, self.buffer_size))
t = np.linspace(0, 1, self.buffer_size)
noise[1] += 2e-6 * np.sin(2 * np.pi * 10 * t)
return noise
def update_buffer(self, new_data):
"""Buffer'ı kaydırmalı olarak günceller / Updates the buffer with new data by rolling"""
self.raw_buffer = np.roll(self.raw_buffer, -new_data.shape[1], axis=1)
self.raw_buffer[:, -new_data.shape[1]:] = new_data
def calculate_channel_contributions(self, features):
"""Her elektrotun tahmindeki katkısını hesaplar / Calculates the contribution of each electrode to the prediction"""
contributions = np.zeros(6)
for i in range(6):
channel_weights = self.model.layers[0].get_weights()[0][i * 6:(i + 1) * 6]
contributions[i] = np.mean(np.abs(channel_weights))
return contributions / np.sum(contributions)
def update_plot(self, frame):
new_data = self.generate_synthetic_data()
self.update_buffer(new_data)
features = self.extract_features(self.raw_buffer)
scaled_features = self.scaler.transform([features])
probs = self.model.predict(scaled_features, verbose=0)[0]
contributions = self.calculate_channel_contributions(features)
self.update_eeg_plot()
self.update_topomap()
self.update_contributions(contributions)
self.update_probabilities(probs)
def update_eeg_plot(self):
self.ax1.clear()
for i in range(6):
offset = i * 20e-6
self.ax1.plot(self.raw_buffer[i] + offset, label=self.ch_names[i])
self.ax1.legend(loc='upper right')
def update_topomap(self):
self.ax2.clear()
info = mne.create_info(self.ch_names, self.fs, 'eeg')
evoked = mne.EvokedArray(self.raw_buffer.mean(axis=1, keepdims=True), info)
evoked.set_montage(self.montage)
mne.viz.plot_topomap(evoked.data[:, 0], evoked.info, axes=self.ax2, show=False)
def update_contributions(self, contributions):
self.ax3.clear()
self.ax3.barh(self.ch_names, contributions, color='skyblue')
for i, v in enumerate(contributions):
self.ax3.text(v, i, f"{v * 100:.1f}%", color='black')
def update_probabilities(self, probs):
emotions = ['Mutlu / Happy', 'Kızgın / Angry', 'Üzgün / Sad', 'Sakin / Calm']
self.ax4.clear()
bars = self.ax4.barh(emotions, probs, color=['green', 'red', 'blue', 'purple'])
for bar in bars:
width = bar.get_width()
self.ax4.text(width, bar.get_y() + 0.2, f"{width * 100:.1f}%", ha='left')
def extract_features(self, data):
"""6 kanal için özellik çıkarımı / Feature extraction for 6 channels"""
features = []
for channel in data:
features.extend([
np.mean(channel),
np.std(channel),
np.ptp(channel),
np.sum(np.abs(np.diff(channel))),
np.median(channel),
np.percentile(np.abs(channel), 95)
])
return np.array(features)
def start_monitoring(self):
anim = FuncAnimation(self.fig, self.update_plot, interval=100)
plt.show()
if __name__ == "__main__":
monitor = EEGMonitor(
model_path='model/path/bai-6 Emotion.h5',
scaler_path='scaler/path/bai-6_scaler.save'
)
monitor.start_monitoring()
```
## 2. Veri Seti ile / With Dataset
```python
import numpy as np
import matplotlib.pyplot as plt
import mne
from matplotlib.animation import FuncAnimation
from tensorflow.keras.models import load_model
import joblib
import os
class EEGMonitor:
def __init__(self, model_path, scaler_path, data_path):
self.model = load_model(model_path)
self.scaler = joblib.load(scaler_path)
self.data_path = data_path
self.ch_names = ['T7', 'C3', 'Cz', 'C4', 'T8', 'Pz']
self.fs = 1000 # Örnekleme frekansı / Sampling frequency
self.buffer_size = 1000 # 1 saniyelik buffer / 1 second buffer
self.raw_buffer = np.zeros((6, self.buffer_size))
self.feature_contributions = {ch: [] for ch in self.ch_names}
# Elektrot pozisyonları / Electrode positions (10-20 system)
self.montage = mne.channels.make_standard_montage('standard_1020')
self.fig = plt.figure(figsize=(15, 10))
self.setup_plots()
self.dataset = self.load_dataset(self.data_path)
self.current_index = 0
def setup_plots(self):
self.ax1 = self.fig.add_subplot(223)
self.ax1.set_title("Canlı EEG Sinyalleri / Live EEG Signals")
self.ax1.set_xlabel("Zaman (ms) / Time (ms)")
self.ax1.set_ylabel("Amplitüd (µV) / Amplitude (µV)")
self.ax2 = self.fig.add_subplot(221)
self.ax2.set_title("Elektrot Konumları / Electrode Locations")
self.ax3 = self.fig.add_subplot(224)
self.ax3.set_title("Elektrot Katkı Oranları / Electrode Contribution Ratios")
self.ax3.set_ylim(0, 1)
self.ax4 = self.fig.add_subplot(222)
self.ax4.set_title("Duygu Tahmin Olasılıkları / Emotion Prediction Probabilities")
self.ax4.set_ylim(0, 1)
plt.tight_layout()
def load_dataset(self, path):
"""Desteklenen veri formatları: .npy (numpy), .csv / Supported data formats: .npy (numpy), .csv"""
if not os.path.exists(path):
raise FileNotFoundError(f"Veri seti bulunamadı / Not found dataset: {path}")
if path.endswith(".npy"):
data = np.load(path)
elif path.endswith(".csv"):
data = np.loadtxt(path, delimiter=',')
else:
raise ValueError("Desteklenmeyen dosya formatı. Yalnızca .npy veya .csv kullanılabilir. / Unsupported file format. Only .npy or .csv can be used.")
# Transpose gerekebilir: (n_channels, n_samples) / Transpose may be needed: (n_channels, n_samples)
if data.shape[0] != 6:
data = data.T
return data
def get_next_chunk(self):
"""Veri setinden buffer_size uzunluğunda bir parça alır / Gets a chunk of length buffer_size from the dataset"""
if self.current_index + self.buffer_size >= self.dataset.shape[1]:
self.current_index = 0
chunk = self.dataset[:, self.current_index:self.current_index + self.buffer_size]
self.current_index += self.buffer_size
return chunk
def update_buffer(self, new_data):
self.raw_buffer = np.roll(self.raw_buffer, -new_data.shape[1], axis=1)
self.raw_buffer[:, -new_data.shape[1]:] = new_data
def calculate_channel_contributions(self, features):
contributions = np.zeros(6)
for i in range(6):
channel_weights = self.model.layers[0].get_weights()[0][i * 6:(i + 1) * 6]
contributions[i] = np.mean(np.abs(channel_weights))
return contributions / np.sum(contributions)
def update_plot(self, frame):
new_data = self.get_next_chunk()
self.update_buffer(new_data)
features = self.extract_features(self.raw_buffer)
scaled_features = self.scaler.transform([features])
probs = self.model.predict(scaled_features, verbose=0)[0]
contributions = self.calculate_channel_contributions(features)
self.update_eeg_plot()
self.update_topomap()
self.update_contributions(contributions)
self.update_probabilities(probs)
def update_eeg_plot(self):
self.ax1.clear()
for i in range(6):
offset = i * 20e-6
self.ax1.plot(self.raw_buffer[i] + offset, label=self.ch_names[i])
self.ax1.legend(loc='upper right')
def update_topomap(self):
self.ax2.clear()
info = mne.create_info(self.ch_names, self.fs, 'eeg')
evoked = mne.EvokedArray(self.raw_buffer.mean(axis=1, keepdims=True), info)
evoked.set_montage(self.montage)
mne.viz.plot_topomap(evoked.data[:, 0], evoked.info, axes=self.ax2, show=False)
def update_contributions(self, contributions):
self.ax3.clear()
self.ax3.barh(self.ch_names, contributions, color='skyblue')
for i, v in enumerate(contributions):
self.ax3.text(v, i, f"{v * 100:.1f}%", color='black')
def update_probabilities(self, probs):
emotions = ['Mutlu / Happy', 'Kızgın / Angry', 'Üzgün / Sad', 'Sakin / Calm']
self.ax4.clear()
bars = self.ax4.barh(emotions, probs, color=['green', 'red', 'blue', 'purple'])
for bar in bars:
width = bar.get_width()
self.ax4.text(width, bar.get_y() + 0.2, f"{width * 100:.1f}%", ha='left')
def extract_features(self, data):
features = []
for channel in data:
features.extend([
np.mean(channel),
np.std(channel),
np.ptp(channel),
np.sum(np.abs(np.diff(channel))),
np.median(channel),
np.percentile(np.abs(channel), 95)
])
return np.array(features)
def start_monitoring(self):
anim = FuncAnimation(self.fig, self.update_plot, interval=1000)
plt.show()
if __name__ == "__main__":
monitor = EEGMonitor(
model_path="model/path/bai-6 Emotion.h5",
scaler_path="scaler/path/bai-6_scaler.save",
data_path="data/path/npy/or/csv"
)
monitor.start_monitoring()
```
</details>
<details>
<summary><strong>bai-6 Emotion v2</strong></summary>
# bai-6 Emotion v2 Yapısı / Structure
|Layer (type) | Output Shape | Param # |
| --- | --- | --- |
| dense_4 (Dense) | (None, 128) | 4,736 |
| batch_normalization_2 | (None, 128) | 512 |
| dropout_3 (Dropout) | (None, 128) | 0 |
| dense_5 (Dense) | (None, 64) | 8,256 |
| batch_normalization_3 | (None, 64) | 256 |
| dropout_4 (Dropout) | (None, 64) | 0 |
| dense_6 (Dense) | (None, 32) | 2,080 |
| dropout_5 (Dropout) | (None, 32) | 0 |
| dense_7 (Dense) | (None, 4) | 132 |
### Total params: 15,972 (62.39 KB)
**Trainable params: 15,588 (60.89 KB)**
**Non-trainable params: 384 (1.50 KB)**
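The table implies the following topology. This is a reconstruction from the parameter counts alone (4,736 = 128 × (36 + 1) fixes the input dimension at 36 features); the activations and dropout rates are assumptions:
```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(36,)),                 # 36 input features
    layers.Dense(128, activation="relu"),     # 36*128 + 128 = 4,736 params
    layers.BatchNormalization(),              # 4 * 128 = 512 params
    layers.Dropout(0.3),
    layers.Dense(64, activation="relu"),      # 128*64 + 64 = 8,256 params
    layers.BatchNormalization(),              # 4 * 64 = 256 params
    layers.Dropout(0.3),
    layers.Dense(32, activation="relu"),      # 64*32 + 32 = 2,080 params
    layers.Dropout(0.3),
    layers.Dense(4, activation="softmax"),    # 32*4 + 4 = 132 params; Happy / Angry / Sad / Calm
])
model.summary()  # Total params: 15,972
```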
# Kullanım / Usage
## Sentetik Veri ile / With Synthetic Data
```python
import numpy as np
import joblib
import time
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from tensorflow.keras.models import load_model
from datetime import datetime
import mne
import warnings
warnings.filterwarnings('ignore')
class EEGEmotionMonitorOptimized:
def __init__(self, model_path, scaler_path, selector_path=None, pca_path=None):
self.emotion_labels = {
0: "Mutlu (Happy)",
1: "Kızgın (Angry)",
2: "Üzgün (Sad)",
3: "Sakin (Calm)"
}
self.emotion_colors = ['#FFD700', '#FF4444', '#4169E1', '#32CD32']
# Kanal isimleri ve parametreler / Channel names and parameters
self.ch_names = ['T7', 'C3', 'Cz', 'C4', 'T8', 'Pz']
self.fs = 128 # Örnekleme hızı / Sampling rate
self.buffer_size = 640
self.update_interval = 200
# Buffer'ları başlat / Initialize buffers
self.raw_buffer = np.zeros((6, self.buffer_size))
self.prediction_history = []
self.confidence_history = []
self.time_history = []
self.max_history = 30
# Performance metrics tracking
self.performance_metrics = {
'total_predictions': 0,
'high_confidence_predictions': 0, # >0.8 confidence
'low_confidence_predictions': 0, # <0.5 confidence
'prediction_times': [], # Processing time per prediction
'emotion_transitions': 0, # Count of emotion changes
'stability_score': 0.0, # How stable predictions are
'average_confidence': 0.0,
'confidence_trend': [], # Last 10 confidence values for trend analysis
'processing_fps': 0.0, # Processing speed
'last_prediction': None
}
self.metrics_text = ""
self.start_time = None
try:
self.model = load_model(model_path)
self.scaler = joblib.load(scaler_path)
self.selector = None
self.pca = None
if selector_path:
try:
self.selector = joblib.load(selector_path)
print("Feature selector loaded")
except:
print("Feature selector not found, using raw features")
if pca_path:
try:
self.pca = joblib.load(pca_path)
print("PCA reducer loaded")
except:
print("PCA reducer not found, skipping dimensionality reduction")
print("Model and preprocessors successfully loaded!")
print(f"Model input shape: {self.model.input_shape}")
print(f"Output classes: {len(self.emotion_labels)}")
# Elektrot pozisyonları (10-20 sistemi) / Electrode positions (10-20 system)
self.montage = mne.channels.make_standard_montage('standard_1020')
except Exception as e:
print(f"Model/preprocessor loading error: {e}")
raise
self.fig = plt.figure(figsize=(14, 8))
self.fig.suptitle('EEG Duygu Tanıma Sistemi / EEG Emotion Analysis System', fontsize=16, fontweight='bold')
self.setup_plots()
self.animation = None
self.is_running = False
def setup_plots(self):
"""4 panelli görselleştirme arayüzünü hazırla (with performance metrics) / Setup 4-panel visualization interface (with performance metrics)"""
self.ax1 = self.fig.add_subplot(221)
self.ax1.set_title("Live EEG Signals", fontsize=10)
self.ax1.set_xlabel("Time (samples)", fontsize=9)
self.ax1.set_ylabel("Amplitude (µV)", fontsize=9)
self.ax1.grid(True, alpha=0.3)
self.ax2 = self.fig.add_subplot(222)
self.ax2.set_title("Emotion Probabilities", fontsize=10)
self.ax2.set_xlim(0, 1)
self.ax3 = self.fig.add_subplot(223)
self.ax3.set_title("Performance Metrics & Confidence Trend", fontsize=10)
self.ax4 = self.fig.add_subplot(224)
self.ax4.set_title("Electrode Contributions", fontsize=10)
self.ax4.set_xlim(0, 1)
plt.tight_layout(pad=1.0)
def generate_realistic_eeg_signal(self, emotion_bias=None):
noise = np.random.normal(0, 3e-6, (6, self.buffer_size))
t = np.linspace(0, self.buffer_size/self.fs, self.buffer_size)
alpha_freq = np.random.uniform(8, 12) # Alpha dominant
beta_freq = np.random.uniform(15, 25) # Beta
for ch in range(6):
if emotion_bias == 0: # Happy - higher beta
beta_amp = np.random.uniform(4e-6, 6e-6)
alpha_amp = np.random.uniform(2e-6, 3e-6)
elif emotion_bias == 1: # Angry - very high beta
beta_amp = np.random.uniform(5e-6, 7e-6)
alpha_amp = np.random.uniform(1e-6, 2e-6)
elif emotion_bias == 2: # Sad - lower activity
beta_amp = np.random.uniform(1e-6, 2e-6)
alpha_amp = np.random.uniform(3e-6, 5e-6)
elif emotion_bias == 3: # Calm - high alpha
beta_amp = np.random.uniform(1e-6, 3e-6)
alpha_amp = np.random.uniform(4e-6, 6e-6)
else: # Random
beta_amp = np.random.uniform(2e-6, 4e-6)
alpha_amp = np.random.uniform(2e-6, 4e-6)
noise[ch] += alpha_amp * np.sin(2 * np.pi * alpha_freq * t + np.random.random() * 2 * np.pi)
noise[ch] += beta_amp * np.sin(2 * np.pi * beta_freq * t + np.random.random() * 2 * np.pi)
return noise.astype(np.float32)
def update_buffer(self, new_data):
samples_to_add = min(new_data.shape[1], self.buffer_size // 4)
self.raw_buffer = np.roll(self.raw_buffer, -samples_to_add, axis=1)
self.raw_buffer[:, -samples_to_add:] = new_data[:, :samples_to_add]
def extract_lightweight_features(self, signal_data):
features = []
for channel_data in signal_data:
time_features = [
np.mean(channel_data),
np.std(channel_data),
np.ptp(channel_data),
np.median(channel_data),
np.mean(np.abs(channel_data)),
np.sqrt(np.mean(channel_data**2))
]
try:
fft_vals = np.abs(np.fft.rfft(channel_data[::4]))
freqs = np.fft.rfftfreq(len(channel_data)//4, 4/self.fs)
delta_power = np.sum(fft_vals[(freqs >= 0.5) & (freqs <= 4)])
theta_power = np.sum(fft_vals[(freqs >= 4) & (freqs <= 8)])
alpha_power = np.sum(fft_vals[(freqs >= 8) & (freqs <= 13)])
beta_power = np.sum(fft_vals[(freqs >= 13) & (freqs <= 30)])
total_power = np.sum(fft_vals) + 1e-10
freq_features = [
delta_power / total_power,
theta_power / total_power,
alpha_power / total_power,
beta_power / total_power
]
except:
freq_features = [0.25, 0.25, 0.25, 0.25]
nonlinear_features = [
np.std(np.diff(channel_data)) / (np.std(channel_data) + 1e-10),
np.mean(np.abs(np.diff(channel_data)))
]
channel_features = time_features + freq_features + nonlinear_features
features.extend(channel_features)
return np.array(features, dtype=np.float32)
def calculate_channel_contributions(self, signal_data):
contributions = np.zeros(6)
for i in range(6):
contributions[i] = np.sqrt(np.mean(signal_data[i]**2))
total = np.sum(contributions) + 1e-10
return contributions / total
def update_performance_metrics(self, predicted_class, confidence, processing_time):
metrics = self.performance_metrics
metrics['total_predictions'] += 1
if confidence > 0.8:
metrics['high_confidence_predictions'] += 1
elif confidence < 0.5:
metrics['low_confidence_predictions'] += 1
metrics['prediction_times'].append(processing_time)
if len(metrics['prediction_times']) > 50:
metrics['prediction_times'].pop(0)
if metrics['prediction_times']:
avg_time = np.mean(metrics['prediction_times'])
metrics['processing_fps'] = 1.0 / max(avg_time, 0.001)
if metrics['last_prediction'] is not None and metrics['last_prediction'] != predicted_class:
metrics['emotion_transitions'] += 1
metrics['last_prediction'] = predicted_class
metrics['confidence_trend'].append(float(confidence))
if len(metrics['confidence_trend']) > 10:
metrics['confidence_trend'].pop(0)
if self.confidence_history:
metrics['average_confidence'] = np.mean(self.confidence_history)
if metrics['total_predictions'] > 1:
transition_rate = metrics['emotion_transitions'] / metrics['total_predictions']
metrics['stability_score'] = max(0, 1.0 - transition_rate)
def update_plot(self, frame):
if not self.is_running:
return
start_time = time.time()
if frame % 2 == 0:
if np.random.random() < 0.2:
emotion_bias = np.random.randint(0, 4)
else:
emotion_bias = None
new_samples = self.buffer_size // 8
new_data = self.generate_realistic_eeg_signal(emotion_bias)[:, :new_samples]
self.update_buffer(new_data)
prediction_start = time.time()
features = self.extract_lightweight_features(self.raw_buffer)
try:
scaled_features = self.scaler.transform([features])
if self.selector is not None:
scaled_features = self.selector.transform(scaled_features)
if self.pca is not None:
scaled_features = self.pca.transform(scaled_features)
probs = self.model.predict(scaled_features, verbose=0)[0]
predicted_class = np.argmax(probs)
confidence = np.max(probs)
except Exception as e:
print(f"Prediction error: {e}")
probs = np.array([0.25, 0.25, 0.25, 0.25])
predicted_class = 0
confidence = 0.25
prediction_time = time.time() - prediction_start
self.update_performance_metrics(predicted_class, confidence, prediction_time)
self.prediction_history.append(predicted_class)
self.confidence_history.append(confidence)
self.time_history.append(datetime.now())
if len(self.prediction_history) > self.max_history:
self.prediction_history.pop(0)
self.confidence_history.pop(0)
self.time_history.pop(0)
contributions = self.calculate_channel_contributions(self.raw_buffer)
self.update_eeg_plot()
self.update_probabilities(probs)
self.update_performance_plot()
self.update_contributions(contributions)
emotion_name = self.emotion_labels[predicted_class]
metrics = self.performance_metrics
elapsed_time = time.time() - self.start_time if self.start_time else 0
print(f"\r{datetime.now().strftime('%H:%M:%S')} | "
f"Emotion: {emotion_name} | "
f"Conf: {confidence:.3f} | "
f"FPS: {metrics['processing_fps']:.1f} | "
f"Stab: {metrics['stability_score']:.2f} | "
f"Total: {metrics['total_predictions']} | "
f"Time: {elapsed_time:.0f}s", end='')
def update_eeg_plot(self):
self.ax1.clear()
colors = plt.cm.tab10(np.linspace(0, 1, 6))
display_samples = min(300, self.buffer_size) # Show fewer samples for performance
for i in range(6):
offset = i * 20e-6
signal = self.raw_buffer[i, -display_samples:] + offset
self.ax1.plot(signal, label=self.ch_names[i],
color=colors[i], linewidth=1.0, alpha=0.8)
self.ax1.set_title("Live EEG Signals", fontsize=12)
self.ax1.set_xlabel("Time (samples)")
self.ax1.set_ylabel("Amplitude (µV)")
self.ax1.legend(loc='upper right', fontsize=8)
self.ax1.grid(True, alpha=0.3)
def update_performance_plot(self):
self.ax3.clear()
metrics = self.performance_metrics
if metrics['total_predictions'] > 0:
high_conf_pct = (metrics['high_confidence_predictions'] / metrics['total_predictions']) * 100
low_conf_pct = (metrics['low_confidence_predictions'] / metrics['total_predictions']) * 100
metrics_text = f"""PERFORMANCE METRICS
Total Predictions: {metrics['total_predictions']}
Average Confidence: {metrics['average_confidence']:.3f}
High Confidence (>0.8): {high_conf_pct:.1f}%
Low Confidence (<0.5): {low_conf_pct:.1f}%
Processing Speed: {metrics['processing_fps']:.1f} FPS
Stability Score: {metrics['stability_score']:.3f}
Emotion Transitions: {metrics['emotion_transitions']}
Model Accuracy: {high_conf_pct:.1f}%
Response Time: {np.mean(metrics['prediction_times'])*1000:.1f}ms"""
self.ax3.text(0.02, 0.98, metrics_text,
transform=self.ax3.transAxes,
fontsize=8, verticalalignment='top',
fontfamily='monospace',
bbox=dict(boxstyle="round,pad=0.3", facecolor="lightblue", alpha=0.7))
if len(metrics['confidence_trend']) > 1:
trend_x = np.arange(len(metrics['confidence_trend']))
trend_data = np.array(metrics['confidence_trend'], dtype=np.float64)
self.ax3.plot(trend_x + 0.6, trend_data * 0.4 + 0.1,
'g-o', markersize=3, linewidth=2, label='Confidence Trend')
# Add trend analysis
if len(metrics['confidence_trend']) > 3:
try:
x_data = np.array(range(len(trend_data[-5:])), dtype=np.float64)
y_data = np.array(trend_data[-5:], dtype=np.float64)
recent_trend = np.polyfit(x_data, y_data, 1)[0]
trend_direction = "↗" if recent_trend > 0.01 else "↘" if recent_trend < -0.01 else "→"
self.ax3.text(0.7, 0.9, f"Trend: {trend_direction}",
transform=self.ax3.transAxes, fontsize=10, fontweight='bold')
except:
# Fallback if polyfit fails
self.ax3.text(0.7, 0.9, f"Trend: →",
transform=self.ax3.transAxes, fontsize=10, fontweight='bold')
self.ax3.set_xlim(0, 1)
self.ax3.set_ylim(0, 1)
self.ax3.set_title("Performance Metrics & Confidence Trend", fontsize=10)
if metrics['average_confidence'] > 0.8:
title_color = 'green'
elif metrics['average_confidence'] > 0.6:
title_color = 'orange'
else:
title_color = 'red'
self.ax3.title.set_color(title_color)
def update_contributions(self, contributions):
self.ax4.clear()
colors = plt.cm.viridis(contributions)
bars = self.ax4.barh(self.ch_names, contributions, color=colors)
for i, (bar, v) in enumerate(zip(bars, contributions)):
if v > 0.05:
self.ax4.text(v + 0.02, i, f"{v*100:.1f}%",
va='center', fontsize=9)
self.ax4.set_title("Electrode Contributions", fontsize=10)
self.ax4.set_xlabel("Contribution Rate", fontsize=9)
self.ax4.set_xlim(0, 0.6)
self.ax4.grid(True, alpha=0.3, axis='x')
def update_probabilities(self, probs):
self.ax2.clear()
emotions = [self.emotion_labels[i] for i in range(4)]
bars = self.ax2.barh(emotions, probs, color=self.emotion_colors)
max_idx = np.argmax(probs)
bars[max_idx].set_edgecolor('black')
bars[max_idx].set_linewidth(2)
for bar, prob in zip(bars, probs):
width = bar.get_width()
if width > 0.05:
self.ax2.text(width + 0.02, bar.get_y() + bar.get_height()/2,
f"{width*100:.1f}%", ha='left', va='center',
fontsize=9, fontweight='bold')
self.ax2.set_title("Emotion Probabilities", fontsize=12)
self.ax2.set_xlabel("Probability")
self.ax2.set_xlim(0, 1)
self.ax2.grid(True, alpha=0.3, axis='x')
current_emotion = emotions[max_idx]
confidence = probs[max_idx]
self.ax2.text(0.5, 1.05, f"Current: {current_emotion} ({confidence*100:.1f}%)",
transform=self.ax2.transAxes, ha='center',
fontsize=10, fontweight='bold', color=self.emotion_colors[max_idx])
def start_monitoring(self):
print("\n" + "="*60)
print(" OPTIMIZED EEG EMOTION RECOGNITION MONITOR")
print("="*60)
print("\nPress 'X' to close the window...")
print("Real-time performance metrics will be displayed")
print("-"*60)
self.is_running = True
self.start_time = time.time()
self.animation = FuncAnimation(
self.fig,
self.update_plot,
interval=self.update_interval,
blit=False,
cache_frame_data=False
)
plt.show()
self.is_running = False
print("\n\nMonitoring stopped.")
if self.prediction_history:
self.print_summary_statistics()
def print_summary_statistics(self):
print("\n" + "="*80)
print(" DETAILED PERFORMANCE & STATISTICS SUMMARY")
print("="*80)
if not self.prediction_history:
print("No data collected.")
return
metrics = self.performance_metrics
total_time = time.time() - self.start_time if self.start_time else 0
print("\n📊 PERFORMANCE METRICS:")
print(f" Total Predictions: {metrics['total_predictions']}")
print(f" Total Runtime: {total_time:.1f} seconds")
print(f" Average Processing Speed: {metrics['processing_fps']:.1f} FPS")
print(f" Average Response Time: {np.mean(metrics['prediction_times'])*1000:.1f}ms")
print(f" Model Stability Score: {metrics['stability_score']:.3f} (0-1, higher=better)")
print(f" Emotion Transitions: {metrics['emotion_transitions']}")
print(f"\n🎯 CONFIDENCE ANALYSIS:")
total = len(self.prediction_history)
high_conf_count = metrics['high_confidence_predictions']
low_conf_count = metrics['low_confidence_predictions']
medium_conf_count = max(0, total - high_conf_count - low_conf_count)
print(f" Average Confidence: {metrics['average_confidence']:.3f}")
print(f" Confidence Std Dev: {np.std(self.confidence_history):.3f}")
print(f" High Confidence (>0.8): {high_conf_count} ({high_conf_count/total*100:.1f}%)")
print(f" Medium Confidence (0.5-0.8): {medium_conf_count} ({medium_conf_count/total*100:.1f}%)")
print(f" Low Confidence (<0.5): {low_conf_count} ({low_conf_count/total*100:.1f}%)")
accuracy_score = high_conf_count / total * 100 if total > 0 else 0
print(f"\n🏆 MODEL QUALITY ASSESSMENT:")
print(f" Estimated Accuracy: {accuracy_score:.1f}% (based on high confidence predictions)")
if accuracy_score >= 80:
quality = "EXCELLENT 🌟"
elif accuracy_score >= 70:
quality = "GOOD ✅"
elif accuracy_score >= 60:
quality = "FAIR ⚠️"
else:
quality = "POOR ❌"
print(f" Model Quality Rating: {quality}")
emotion_counts = {i: 0 for i in range(4)}
for pred in self.prediction_history:
emotion_counts[pred] += 1
print(f"\n😊 EMOTION DISTRIBUTION:")
for emotion_id, count in emotion_counts.items():
percentage = (count / total) * 100
bar = "█" * int(percentage / 5)
print(f" {self.emotion_labels[emotion_id]:<15}: {count:>3} ({percentage:>5.1f}%) {bar}")
dominant_emotion = max(emotion_counts, key=emotion_counts.get)
dominant_percentage = emotion_counts[dominant_emotion] / total * 100
print(f"\n Dominant Emotion: {self.emotion_labels[dominant_emotion]} ({dominant_percentage:.1f}%)")
if len(metrics['confidence_trend']) > 3:
try:
x_data = np.array(range(len(metrics['confidence_trend'])), dtype=np.float64)
y_data = np.array(metrics['confidence_trend'], dtype=np.float64)
trend_slope = np.polyfit(x_data, y_data, 1)[0]
print(f"\n📈 TREND ANALYSIS:")
if trend_slope > 0.01:
trend_desc = "IMPROVING ↗"
elif trend_slope < -0.01:
trend_desc = "DECLINING ↘"
else:
trend_desc = "STABLE →"
print(f" Recent Confidence Trend: {trend_desc} (slope: {trend_slope:.4f})")
except Exception as e:
print(f"\n📈 TREND ANALYSIS:")
print(f" Recent Confidence Trend: STABLE → (analysis unavailable)")
print(f"\n💡 RECOMMENDATIONS:")
if accuracy_score < 70:
print(" • Consider retraining the model with more data")
print(" • Check data quality and preprocessing steps")
if metrics['stability_score'] < 0.7:
print(" • Model predictions are unstable - review signal quality")
if metrics['processing_fps'] < 5:
print(" • Processing speed is slow - consider model optimization")
if accuracy_score >= 80 and metrics['stability_score'] >= 0.8:
print(" • Model performance is excellent! ✨")
print("\n" + "="*80)
def main():
model_path = 'path/to/bai-6 EmotionOptimized.h5'
scaler_path = 'path/to/bai-6 ScalerOptimized.pkl'
selector_path = 'path/to/bai-6_feature_selector_opt.pkl'
pca_path = 'path/to/bai-6_pca_reducer_opt.pkl'
try:
monitor = EEGEmotionMonitorOptimized(
model_path, scaler_path, selector_path, pca_path
)
monitor.start_monitoring()
except FileNotFoundError as e:
print(f"Model or preprocessor file not found: {e}")
print("Please ensure the model has been trained and saved.")
print("Available fallback: Using basic model without feature selection/PCA")
try:
basic_model_path = 'path/to/bai-6 EmotionOptimized.h5'
basic_scaler_path = 'path/to/bai-6 ScalerOptimized.pkl'
monitor = EEGEmotionMonitorOptimized(basic_model_path, basic_scaler_path)
monitor.start_monitoring()
except Exception as e2:
print(f"Fallback also failed: {e2}")
except Exception as e:
print(f"Error: {e}")
import traceback
traceback.print_exc()
if __name__ == "__main__":
main()
```
</details>
-------------
## Lisans/License
CC-BY-NC-SA-4.0
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755686444
|
Ferdi3425
| 2025-08-20T10:41:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:41:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lautan/blockassist-bc-gentle_patterned_goat_1755684882
|
lautan
| 2025-08-20T10:41:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:41:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
harjinder-kaur-uppal-viral-video/New.full.videos.harjinder.kaur.uppal.Viral.Video.Official.Tutorial
|
harjinder-kaur-uppal-viral-video
| 2025-08-20T10:41:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T10:40:43Z |
<a href="https://sdu.sk/AyL"><img src="https://files.qatarliving.com/event/2025/06/20/Jawan69_0-1749987397680.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝙨𝙞𝙜𝙣 𝙪𝙥 𝙖𝙣𝙙 𝙬𝙖𝙩𝙘𝙝 𝙛𝙪𝙡𝙡 𝙫𝙞𝙙𝙚𝙤 𝙃𝘿)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755685231
|
Sayemahsjn
| 2025-08-20T10:39:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:39:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
trunghieuma22/finetuned_model
|
trunghieuma22
| 2025-08-20T10:39:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gpt_oss",
"trl",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T10:39:05Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** trunghieuma22
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Aady14/lora-model
|
Aady14
| 2025-08-20T10:38:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T10:38:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VIDEOS-18-brown-girl-viral-video-Clip/New.full.videos.brown.girl.Viral.Video.Official.Tutorial
|
VIDEOS-18-brown-girl-viral-video-Clip
| 2025-08-20T10:37:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T10:37:36Z |
<a href="https://sdu.sk/AyL"><img src="https://files.qatarliving.com/event/2025/06/20/Jawan69_0-1749987397680.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝙨𝙞𝙜𝙣 𝙪𝙥 𝙖𝙣𝙙 𝙬𝙖𝙩𝙘𝙝 𝙛𝙪𝙡𝙡 𝙫𝙞𝙙𝙚𝙤 𝙃𝘿)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755686164
|
Ferdi3425
| 2025-08-20T10:37:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:36:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vanbitcase/2bfull
|
Vanbitcase
| 2025-08-20T10:36:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-to-text",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Qwen2-VL-2B-Instruct",
"base_model:finetune:unsloth/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-20T10:35:09Z |
---
base_model: unsloth/Qwen2-VL-2B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Vanbitcase
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-VL-2B-Instruct
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1755686136
|
xinnn32
| 2025-08-20T10:36:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:35:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755684646
|
lisaozill03
| 2025-08-20T10:35:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:35:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amaye15/autoencoder-robust-demo
|
amaye15
| 2025-08-20T10:33:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"autoencoder",
"feature-extraction",
"generated_from_trainer",
"custom_code",
"region:us"
] |
feature-extraction
| 2025-08-20T10:17:58Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: autoencoder-robust-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# autoencoder-robust-demo
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
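For reference, these settings correspond roughly to the following 🤗 `TrainingArguments` (a hedged sketch; the original training script is not part of this card):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="autoencoder-robust-demo",
    learning_rate=1e-3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch_fused",   # AdamW with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```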
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 1.5616 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0
- Datasets 4.0.0
- Tokenizers 0.21.4
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755685914
|
kapalbalap
| 2025-08-20T10:32:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:32:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1755685839
|
xinnn32
| 2025-08-20T10:31:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:31:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
meetween/Llama-speechlmm-1.0-s
|
meetween
| 2025-08-20T10:30:25Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava",
"feature-extraction",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-03-01T11:09:40Z |
---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: Llama-speechlmm-1.0-s
results: []
---
## Model information
SpeechLMM 1.0 is a collection of instruction-tuned multimodal and multilingual generative models in 4 different sizes: S (2B), M (4B), L (9B) and XL (71B), supporting text, audio and video as input and only text as output. The SpeechLMM 1.0 models are optimized for various X-to-text generation tasks, namely:
- Machine Translation
- Automatic Speech Recognition
- Speech Translation
- Speech Summarization
- Spoken Question Answering
- Spoken Language Understanding (beta)
- Visual Speech Recognition (beta)
**Model Developer:** Meetween consortium
**Supported Languages:** English, French, Italian, German, and Spanish are officially supported (for a subset of the supported tasks). The Llama 3.X backbone and the SeamlessM4T v2 audio encoder have been trained on a broader collection of languages than these 5 supported languages, so the model might exhibit good performance on other languages too.
**Model Release Date:** Feb 28, 2025
**License:** see [LICENSE](LICENSE)
### Model Architecture
SpeechLMM 1.0 is an auto-regressive multimodal language model based on a Llama 3.X backbone (X varies with the model size), a speech-specific stack consisting of a pre-trained audio encoder ([SeamlessM4T v2](https://ai.meta.com/research/publications/seamless-multilingual-expressive-and-streaming-speech-translation/)) and an audio adapter, and a video-specific stack consisting of a pre-trained video encoder ([Auto-AVSR](https://ieeexplore.ieee.org/document/10096889)) and a video adapter.
<!-- TODO: add the image of the model architecture here -->
| Model | Params | Input modalities | Output modalities | Context Length |
|:---------------- |:----------- |:------------------------------------------ |:----------------- |:-------------- |
| SpeechLMM 1.0 S | 2B (2.17B) | Multilingual text and audio, English video | Multilingual Text | 128k |
| SpeechLMM 1.0 M | 4B (4.15B) | Multilingual text and audio, English video | Multilingual Text | 128k |
| SpeechLMM 1.0 L | 9B (8.98B) | Multilingual text and audio, English video | Multilingual Text | 128k |
| SpeechLMM 1.0 XL (beta) | 71B (71.5B) | Multilingual text and audio, English video | Multilingual Text | 128k |
#### Audio and video encoders
For all the 4 sizes of SpeechLMM 1.0, the audio encoder is **SeamlessM4T v2 Large** (`facebook/seamless-m4t-v2-large`) and the video encoder is **Auto-AVSR** (`vsr_trlrs3vox2_base`).
#### Audio and video adapters
For all the 4 sizes of SpeechLMM 1.0, the audio and video adapters are:
| Modality | Architecture | Number of layers | Compression factor |
| :------- | :----------- | :--------------- | :----------------- |
| Audio | MLP | 4 | 1 |
| Video | Window-level Q-former <br> (4 queries) | 4 | 4 |
#### LLM backbone
| Model | Backbone |
|:---------------- |:---------------------- |
| SpeechLMM 1.0 S | Llama 3.2 1B Instruct |
| SpeechLMM 1.0 M | Llama 3.2 3B Instruct |
| SpeechLMM 1.0 L | Llama 3.1 8B Instruct |
| SpeechLMM 1.0 XL (beta) | Llama 3.3 70B Instruct |
## How to use
Currently, this model can only be used via our [`speechlmm`](https://github.com/meetween/speechlmm) codebase. Refer to the instructions there for more details.
Important: before you can use this model, you must download the SeamlessM4T v2 speech encoder and the Auto-AVSR video encoder by following the instructions provided in the README of the above repo. Please note that by doing so, you agree with their respective license terms.
## Training Data
### Monolingual
| TASK | Task name | Dataset | Language | License |
| -------- | ---------------------------- | ------------------ | -------- | ------------------------------------------ |
| **ASR** | Automatic Speech Recognition | **LibriHeavy** | en | CC-BY-4.0 |
| | | **LibriTTS** | en | CC BY 4.0 |
| | | **AMI** | en | CC-BY-4.0 |
| | | **ICSI** | en | CC-BY-4.0 |
| **LIPREAD** | Visual Speech Recognition | **LRS2-BBC** | en | Custom |
| **SSUM** | Speech Summarization | **AMI** | en | CC-BY-4.0 |
| | | **ICSI** | en | CC-BY-4.0 |
| **SQA** | Spoken Question Answering | **Spoken SQUAD** | en | CC-BY-SA-4.0 |
| **SLU** | Spoken Language Understanding| **SLURP** | en | CC BY 4.0 (text) <br> CC BY-NC 4.0 (audio) |
### Multilingual
| TASK | Task name | Dataset | Language | License |
| ---------------- | ----------------------------- | ------------------------------------ | ------------------------------------------- | ------------------------------------------ |
| **ASR** | Automatic Speech Recognition | **CoVoST2** | en, fr, it, de, es | CC0 |
| | | **CommonVoice** | en, fr, it, de, es | Apache-2.0 |
| **ST** | Speech-to-text Translation | **CoVoST2** | en → de, {fr, it, de, es} → en | CC0 |
| | | **EuroParl-ST** | {en, fr, it, de, es} → {en, fr, it, de, es} | CC-BY-NC-4.0 |
| **MT** | Machine Translation | **EuroParl-ST** | {en, fr, it, de, es} → {en, fr, it, de, es} | CC-BY-NC-4.0 |
| **TextInstruct** | Text Instruction Following | **Everything_Instruct_Multilingual** | en, fr, it, de, es, ru, zh, ko, ur, la, ar,<br>hi, ja, nl, pt | Apache-2.0 |
| **SLU** | Spoken Language Understanding | **Speech-Massive** | fr, de | CC-BY-NC-SA-4.0 |
## Evaluation Results
The following results specifically refer to the S model.
### ASR Metrics
| Dataset | Language | WER ⬇ |
|:----------|:-----------|------:|
| **MUSTC** | en | 19.2 |
| **MTEDX** | it | 29.43 |
| **MTEDX** | fr | 28.97 |
| **ACL6060** | en | 19.4 |
| **MTEDX** | es | 29.71 |
### SQA Metrics
| Dataset | Language | Accuracy ⬆ |
|:--------------|:-----------|-----------:|
| **Spoken SQuAD** | en | 65.93 |
**NOTE**: Accuracy is measured with an LLM as a judge (**Llama3-70b-8192**, via the Groq API) using the following prompts:
- **System prompt**
You are a helpful assistant that evaluates answers to questions given a certain context. You will be given inputs of the form:<br>
Context: \<CONTEXT\><br>
Question: \<QUESTION\><br>
Answer: \<ANSWER\><br>
Your task is to determine if the given answer is correct or not, assuming the correct answer is contained in the context. Your response should be formatted as a JSON string having the following structure:
{"correct_answer": \<true/false\>, "rationale": \<RATIONALE\>}
where 'rationale' must be a string explaining why the answer is correct or incorrect. If you need to include double quote characters (") in the 'rationale' string, you must escape them with a backslash (\\). For example, if you want to include the string "Hello, World!", you should write it as \\"Hello, World!\\".
- **User prompt**
Context: \<CONTEXT\><br>
Question: \<QUESTION\><br>
Answer: \<ANSWER\>
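As an illustration, a judging call following this protocol might look like the sketch below (a hedged example, not the authors' evaluation harness; `SYSTEM_PROMPT`, `context`, `question` and `answer` are placeholder variables):

```python
import json
from groq import Groq

client = Groq()  # expects GROQ_API_KEY in the environment
completion = client.chat.completions.create(
    model="llama3-70b-8192",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # the system prompt quoted above
        {"role": "user", "content": f"Context: {context}\nQuestion: {question}\nAnswer: {answer}"},
    ],
)
verdict = json.loads(completion.choices[0].message.content)
print(verdict["correct_answer"], verdict["rationale"])
```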
### MT Metrics
| Dataset | Source Language | Target Language | Bleu ⬆ | CHRF ⬆ |
|:----------|:------------------|:------------------|-------:|-------:|
| **FLORES** | en | de | 21.11 | 51.77 |
| **FLORES** | en | es | 18.61 | 48.02 |
| **FLORES** | en | it | 16.63 | 47.24 |
| **ACL6060** | en | fr | 34.86 | 60.48 |
| **FLORES** | en | fr | 24 | 55.36 |
### SSUM Metrics
| Dataset | Language | R-1_F1 | R-2_F1 | R-L_F1 |
|:----------|:-----------|---------:|---------:|---------:|
| **ICSI** | en | 22.9 | 2.7 | 20.4 |
### ST Metrics
| Dataset | Source Language | Target Language | Bleu ⬆ | CHRF ⬆ |
|:----------|:------------------|:------------------|-------:|-------:|
| **ACL6060** | en | fr | 28.65 | 56.2 |
| **ACL6060** | en | de | 19.12 | 49.06 |
| **MUSTC** | en | de | 16.98 | 45.48 |
| **MUSTC** | en | it | 14.68 | 43.03 |
| **MUSTC** | en | fr | 19.09 | 48.09 |
| **MUSTC** | en | es | 20.42 | 49.07 |
## Framework versions
- Transformers 4.45.0
- Pytorch 2.3.1+cu124.post2
- Datasets 3.2.0
- Tokenizers 0.20.0
|
yalcineray/tarihci-gemma3-12b
|
yalcineray
| 2025-08-20T10:30:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:quantized:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
image-text-to-text
| 2025-08-19T21:34:03Z |
---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** yalcineray
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-12b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
brown-kudi-girl-viral-video-Clip/News.full.videos.brown.girl.Viral.Video.Official.Tutorial
|
brown-kudi-girl-viral-video-Clip
| 2025-08-20T10:29:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T10:28:05Z |
<a href="https://sdu.sk/AyL"><img src="https://files.qatarliving.com/event/2025/06/20/Jawan69_0-1749987397680.gif" alt="fsd" /></a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝙨𝙞𝙜𝙣 𝙪𝙥 𝙖𝙣𝙙 𝙬𝙖𝙩𝙘𝙝 𝙛𝙪𝙡𝙡 𝙫𝙞𝙙𝙚𝙤 𝙃𝘿)</a>
<a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
|
ElbertFliek/MyGemmaNPC
|
ElbertFliek
| 2025-08-20T10:28:44Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T14:38:22Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyGemmaNPC
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyGemmaNPC
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ElbertFliek/MyGemmaNPC", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Ani24/SFM_Finetuned
|
Ani24
| 2025-08-20T10:28:23Z | 0 | 0 | null |
[
"Seismic",
"Geology",
"Foundation",
"en",
"dataset:porestar/crossdomainfoundationmodeladaption-seismicfacies",
"dataset:porestar/seismicfoundationmodel-geobody",
"dataset:porestar/seismicfoundationmodel-denoise",
"license:apache-2.0",
"region:us"
] | null | 2025-08-20T10:23:04Z |
---
license: apache-2.0
datasets:
- porestar/crossdomainfoundationmodeladaption-seismicfacies
- porestar/seismicfoundationmodel-geobody
- porestar/seismicfoundationmodel-denoise
language:
- en
metrics:
- accuracy
tags:
- Seismic
- Geology
- Foundation
---
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755683923
|
coelacanthxyz
| 2025-08-20T10:26:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:26:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
martarroyo/marroyo-lora
|
martarroyo
| 2025-08-20T10:26:30Z | 1 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-02-14T12:30:24Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: martarroyo
---
# Marroyo Lora
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `martarroyo` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "martarroyo",
"lora_weights": "https://huggingface.co/martarroyo/marroyo-lora/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('martarroyo/marroyo-lora', weight_name='lora.safetensors')
image = pipeline('martarroyo').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/martarroyo/marroyo-lora/discussions) to add images that show off what you’ve made with this LoRA.
|
valleriee/pii-model-16
|
valleriee
| 2025-08-20T10:26:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T10:21:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1755685414
|
canoplos112
| 2025-08-20T10:25:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:24:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755685321
|
Ferdi3425
| 2025-08-20T10:23:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:23:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
8li/flux_prayer
|
8li
| 2025-08-20T10:23:29Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-20T10:23:20Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: praying_illustration
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# flux_prayer
<Gallery />
## Model description
a prayer lens
## Trigger words
You should use `praying_illustration` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/8li/flux_prayer/tree/main) them in the Files & versions tab.
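A hedged sketch of loading these weights with 🧨 diffusers (the LoRA filename below is an assumption; check the Files & versions tab for the actual name):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("8li/flux_prayer", weight_name="lora.safetensors")  # filename is an assumption
image = pipeline("praying_illustration, a quiet figure praying in soft light").images[0]
```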
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
drawhisper/illustrious-xl
|
drawhisper
| 2025-08-20T10:23:08Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-08-16T17:16:21Z |
---
license: apache-2.0
---
|
Uppal-Farm-Girl-Viral-Video-Original-Link/Full.Uppal.Farm.Girl.Viral.Video.Original.Link.Official
|
Uppal-Farm-Girl-Viral-Video-Original-Link
| 2025-08-20T10:21:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T10:21:22Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5xr5mb3e?leaked-videos/" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755685257
|
kapalbalap
| 2025-08-20T10:21:46Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:21:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
afung/pika-towel-folding-ee_absolute-fisheye
|
afung
| 2025-08-20T10:20:48Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:afung/pika-towel-folding-ee_absolute-fisheye",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-19T19:57:00Z |
---
datasets: afung/pika-towel-folding-ee_absolute-fisheye
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- robotics
- diffusion
- lerobot
---
# Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
    --policy.type=diffusion \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
    --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
BinBashir/Mobile_NaijaBERT_on_jumia_dataset
|
BinBashir
| 2025-08-20T10:19:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mobilebert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T10:19:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755683603
|
helmutsukocok
| 2025-08-20T10:19:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:18:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tgrhn/whisper-large-v3-turbo_finetuned-3
|
tgrhn
| 2025-08-20T10:18:43Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T10:18:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755683472
|
calegpedia
| 2025-08-20T10:17:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:17:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755684996
|
kapalbalap
| 2025-08-20T10:17:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:17:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HMC83/request_writer_smol_lora
|
HMC83
| 2025-08-20T10:16:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"en",
"dataset:HMC83/synthetic_foi_requests",
"base_model:HuggingFaceTB/SmolLM2-360M-Instruct",
"base_model:finetune:HuggingFaceTB/SmolLM2-360M-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T09:32:19Z |
---
base_model: HuggingFaceTB/SmolLM2-360M-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
license: apache-2.0
language:
- en
datasets:
- HMC83/synthetic_foi_requests
---
## Model Description
Request Writer Smol has been fine-tuned to generate Freedom of Information (FOI) requests to UK public authorities from an authority name and three keywords. The model has been trained on a synthetic dataset of FOI requests covering various topics and public authorities across the UK.
The model demonstrates improved generation of properly formatted, focused FOI requests for specific information that are unlikely to be refused on cost grounds.
## Model Architecture
- **Base Model**: SmolLM2-360M-Instruct
- **Fine-tuning Method**: LoRA
- **LoRA Configuration**:
- Rank (r): 8
- Alpha: 16
- Dropout: 0.1
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- **Training Parameters**: 2.34% of total parameters trained (8.68M trainable parameters)
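As a rough illustration, the adapter settings above map onto the `peft` library as in the minimal sketch below (this is not the exact training code; the base-model loading shown here is illustrative):
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Adapter settings taken from the configuration listed above
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-360M-Instruct")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # expect roughly 2.3% of parameters to be trainable
```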
## Fine-tuning Training Data
### Dataset Details
- **Source**: Synthetic FOI requests dataset (HMC83/synthetic_foi_requests)
- **Size**: 51,308 training examples
- **Format**: Conversational
### Training Configuration
- **Epochs**: 3
- **Batch Size**: 32
- **Learning Rate**: 1e-5
- **Optimizer**: AdamW 8-bit
- **Sequence Length**: 4096 tokens
## Limitations and Considerations
The small size of the model (360M parameters) may limit the complexity of the requests it can generate. The model is trained specifically for UK FOI requests and has not been trained to generate requests for information about individuals.
## Usage Guidelines
### Input Format
The model expects a prompt in the form of:
```
Generate a formal Freedom of Information request to [authority_name] using these keywords: [keyword1, keyword2, keyword3]
```
### Output Format
The model will try to generate a concise, properly structured FOI request, starting with the phrase "Please provide me with a copy of the following information:", followed by one to three numbered, specific information requests.
## Model Versions
### Available Formats
- **LoRA Adapters**: `HMC83/request_writer_smol_lora`
- **Merged 16-bit**: `HMC83/request_writer_smol`
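For quick experimentation, the sketch below loads the merged 16-bit checkpoint with 🤗 Transformers (a minimal example; it assumes the merged model keeps the base model's chat template, and the authority and keywords are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HMC83/request_writer_smol"  # merged 16-bit checkpoint listed above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = ("Generate a formal Freedom of Information request to Transport for London "
          "using these keywords: cycling, accidents, 2023")
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```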
### Disclaimer
Users are responsible for ensuring that their intended use complies with any applicable laws and regulations. Generated requests should be reviewed and potentially modified before submission to public authorities. Requests should be made in good faith and for legitimate purposes. The model can hallucinate, so any outputs should not be relied upon without being verified. Outputs may also reflect any biases that are present in the underlying training data.
|
VoilaRaj/81_KuSJSw
|
VoilaRaj
| 2025-08-20T10:15:34Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-20T10:11:28Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
luismirandacruz/dqn-SpaceInvadersNoFrameskip-v4
|
luismirandacruz
| 2025-08-20T10:14:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-20T09:22:08Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 29.00 +/- 64.30
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga luismirandacruz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga luismirandacruz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga luismirandacruz
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755684818
|
kapalbalap
| 2025-08-20T10:14:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:14:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ljk1291/Wan2.1_I2V-14B-480P
|
ljk1291
| 2025-08-20T10:14:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"video",
"video-generation",
"image-to-video",
"en",
"zh",
"license:apache-2.0",
"diffusers:WanImageToVideoPipeline",
"region:us"
] |
image-to-video
| 2025-08-20T10:14:25Z |
---
license: apache-2.0
language:
- en
- zh
pipeline_tag: image-to-video
library_name: diffusers
tags:
- video
- video-generation
---
# Wan2.1 + Lightx2v
<p align="center">
💜 <a href=""><b>Wan</b></a>    |    🖥️ <a href="https://github.com/Wan-Video/Wan2.1">GitHub</a>    |   🤗 <a href="https://huggingface.co/Wan-AI/">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/organization/Wan-AI">ModelScope</a>   |    📑 <a href="">Paper (Coming soon)</a>    |    📑 <a href="https://wanxai.com">Blog</a>    |   💬 <a href="https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg">WeChat Group</a>   |    📖 <a href="https://discord.gg/p5XbdQV7">Discord</a>  
<br>
<p align="center">
🔗 <a href="https://huggingface.co/lightx2v/Wan2.1-I2V-14B-480P-StepDistill-CfgDistill-Lightx2v"><b>Lightx2v</b></a> — Distilled & optimized Wan2.1 for fast, high-quality 480P image-to-video generation
</p>
<br>
-----
[**Wan: Open and Advanced Large-Scale Video Generative Models**]() <br>
In this repository, we present **Wan2.1**, a comprehensive and open suite of video foundation models that pushes the boundaries of video generation. **Wan2.1** offers these key features:
- 👍 **SOTA Performance**: **Wan2.1** consistently outperforms existing open-source models and state-of-the-art commercial solutions across multiple benchmarks.
- 👍 **Supports Consumer-grade GPUs**: The T2V-1.3B model requires only 8.19 GB VRAM, making it compatible with almost all consumer-grade GPUs. It can generate a 5-second 480P video on an RTX 4090 in about 4 minutes (without optimization techniques like quantization). Its performance is even comparable to some closed-source models.
- 👍 **Multiple Tasks**: **Wan2.1** excels in Text-to-Video, Image-to-Video, Video Editing, Text-to-Image, and Video-to-Audio, advancing the field of video generation.
- 👍 **Visual Text Generation**: **Wan2.1** is the first video model capable of generating both Chinese and English text, featuring robust text generation that enhances its practical applications.
- 👍 **Powerful Video VAE**: **Wan-VAE** delivers exceptional efficiency and performance, encoding and decoding 1080P videos of any length while preserving temporal information, making it an ideal foundation for video and image generation.
This repo contains our I2V-14B model, which is capable of generating 480P videos, offering advantages in terms of fast generation and excellent quality.
## Video Demos
<div align="center">
<video width="80%" controls>
<source src="https://cloud.video.taobao.com/vod/Jth64Y7wNoPcJki_Bo1ZJTDBvNjsgjlVKsNs05Fqfps.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</div>
## 🔥 Latest News!!
* Feb 25, 2025: 👋 We've released the inference code and weights of Wan2.1.
## 📑 Todo List
- Wan2.1 Text-to-Video
- [x] Multi-GPU Inference code of the 14B and 1.3B models
- [x] Checkpoints of the 14B and 1.3B models
- [x] Gradio demo
- [x] Diffusers integration
- [ ] ComfyUI integration
- Wan2.1 Image-to-Video
- [x] Multi-GPU Inference code of the 14B model
- [x] Checkpoints of the 14B model
- [x] Gradio demo
- [x] Diffusers integration
- [ ] ComfyUI integration
## Quickstart
#### Installation
Clone the repo:
```
git clone https://github.com/Wan-Video/Wan2.1.git
cd Wan2.1
```
Install dependencies:
```
# Ensure torch >= 2.4.0
pip install -r requirements.txt
```
#### Model Download
| Models | Download Link | Notes |
| --------------|-------------------------------------------------------------------------------|-------------------------------|
| T2V-14B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-T2V-14B) | Supports both 480P and 720P
| I2V-14B-720P | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-720P) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-I2V-14B-720P) | Supports 720P
| I2V-14B-480P | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-I2V-14B-480P) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-I2V-14B-480P) | Supports 480P
| T2V-1.3B | 🤗 [Huggingface](https://huggingface.co/Wan-AI/Wan2.1-T2V-1.3B) 🤖 [ModelScope](https://www.modelscope.cn/models/Wan-AI/Wan2.1-T2V-1.3B) | Supports 480P
> 💡Note: The 1.3B model is capable of generating videos at 720P resolution. However, due to limited training at this resolution, the results are generally less stable compared to 480P. For optimal performance, we recommend using 480P resolution.
Download models using 🤗 huggingface-cli:
```
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.1-I2V-14B-480P-Diffusers --local-dir ./Wan2.1-I2V-14B-480P-Diffusers
```
Download models using 🤖 modelscope-cli:
```
pip install modelscope
modelscope download Wan-AI/Wan2.1-I2V-14B-480P-Diffusers --local_dir ./Wan2.1-I2V-14B-480P-Diffusers
```
#### Run Image-to-Video Generation
Similar to Text-to-Video, Image-to-Video is also divided into processes with and without the prompt extension step. The specific parameters and their corresponding settings are as follows:
<table>
<thead>
<tr>
<th rowspan="2">Task</th>
<th colspan="2">Resolution</th>
<th rowspan="2">Model</th>
</tr>
<tr>
<th>480P</th>
<th>720P</th>
</tr>
</thead>
<tbody>
<tr>
<td>i2v-14B</td>
<td style="color: red;">❌</td>
<td style="color: green;">✔️</td>
<td>Wan2.1-I2V-14B-720P</td>
</tr>
<tr>
<td>i2v-14B</td>
<td style="color: green;">✔️</td>
<td style="color: red;">❌</td>
<td>Wan2.1-I2V-14B-480P</td>
</tr>
</tbody>
</table>
##### (1) Without Prompt Extension
- Single-GPU inference
```
python generate.py --task i2v-14B --size 832*480 --ckpt_dir ./Wan2.1-I2V-14B-480P --image examples/i2v_input.JPG --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```
> 💡For the Image-to-Video task, the `size` parameter represents the area of the generated video, with the aspect ratio following that of the original input image.
- Multi-GPU inference using FSDP + xDiT USP
```
pip install "xfuser>=0.4.1"
torchrun --nproc_per_node=8 generate.py --task i2v-14B --size 832*480 --ckpt_dir ./Wan2.1-I2V-14B-480P --image examples/i2v_input.JPG --dit_fsdp --t5_fsdp --ulysses_size 8 --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```
Wan can also be run directly using 🤗 Diffusers!
```python
import torch
import numpy as np
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import CLIPVisionModel
# Available models: Wan-AI/Wan2.1-I2V-14B-480P-Diffusers, Wan-AI/Wan2.1-I2V-14B-720P-Diffusers
model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
image_encoder = CLIPVisionModel.from_pretrained(model_id, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16)
pipe.to("cuda")
image = load_image(
"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astronaut.jpg"
)
max_area = 480 * 832
aspect_ratio = image.height / image.width
mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
image = image.resize((width, height))
prompt = (
"An astronaut hatching from an egg, on the surface of the moon, the darkness and depth of space realised in "
"the background. High quality, ultrarealistic detail and breath-taking movie-like camera shot."
)
negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"
output = pipe(
image=image, prompt=prompt, negative_prompt=negative_prompt, height=height, width=width, num_frames=81, guidance_scale=5.0
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```
##### (2) Using Prompt Extension
Run with local prompt extension using `Qwen/Qwen2.5-VL-7B-Instruct`:
```
python generate.py --task i2v-14B --size 832*480 --ckpt_dir ./Wan2.1-I2V-14B-480P --image examples/i2v_input.JPG --use_prompt_extend --prompt_extend_model Qwen/Qwen2.5-VL-7B-Instruct --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```
Run with remote prompt extension using `dashscope`:
```
DASH_API_KEY=your_key python generate.py --task i2v-14B --size 832*480 --ckpt_dir ./Wan2.1-I2V-14B-480P --image examples/i2v_input.JPG --use_prompt_extend --prompt_extend_method 'dashscope' --prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression. Blurred beach scenery forms the background featuring crystal-clear waters, distant green hills, and a blue sky dotted with white clouds. The cat assumes a naturally relaxed posture, as if savoring the sea breeze and warm sunlight. A close-up shot highlights the feline's intricate details and the refreshing atmosphere of the seaside."
```
##### (3) Running local Gradio
```
cd gradio
# if one only uses 480P model in gradio
DASH_API_KEY=your_key python i2v_14B_singleGPU.py --prompt_extend_method 'dashscope' --ckpt_dir_480p ./Wan2.1-I2V-14B-480P
# if one only uses 720P model in gradio
DASH_API_KEY=your_key python i2v_14B_singleGPU.py --prompt_extend_method 'dashscope' --ckpt_dir_720p ./Wan2.1-I2V-14B-720P
# if one uses both 480P and 720P models in gradio
DASH_API_KEY=your_key python i2v_14B_singleGPU.py --prompt_extend_method 'dashscope' --ckpt_dir_480p ./Wan2.1-I2V-14B-480P --ckpt_dir_720p ./Wan2.1-I2V-14B-720P
```
## Manual Evaluation
We conducted extensive manual evaluations of the Image-to-Video model's performance, and the results are presented in the table below. The results clearly indicate that **Wan2.1** outperforms both closed-source and open-source models.
<div align="center">
<img src="assets/i2v_res.png" alt="" style="width: 80%;" />
</div>
## Computational Efficiency on Different GPUs
We test the computational efficiency of different **Wan2.1** models on different GPUs in the following table. The results are presented in the format: **Total time (s) / peak GPU memory (GB)**.
<div align="center">
<img src="assets/comp_effic.png" alt="" style="width: 80%;" />
</div>
> The parameter settings for the tests presented in this table are as follows:
> (1) For the 1.3B model on 8 GPUs, set `--ring_size 8` and `--ulysses_size 1`;
> (2) For the 14B model on 1 GPU, use `--offload_model True`;
> (3) For the 1.3B model on a single 4090 GPU, set `--offload_model True --t5_cpu`;
> (4) For all testings, no prompt extension was applied, meaning `--use_prompt_extend` was not enabled.
-------
## Introduction of Wan2.1
**Wan2.1** is designed on the mainstream diffusion transformer paradigm, achieving significant advancements in generative capabilities through a series of innovations. These include our novel spatio-temporal variational autoencoder (VAE), scalable training strategies, large-scale data construction, and automated evaluation metrics. Collectively, these contributions enhance the model’s performance and versatility.
##### (1) 3D Variational Autoencoders
We propose a novel 3D causal VAE architecture, termed **Wan-VAE**, specifically designed for video generation. By combining multiple strategies, we improve spatio-temporal compression, reduce memory usage, and ensure temporal causality. **Wan-VAE** demonstrates significant advantages in performance efficiency compared to other open-source VAEs. Furthermore, our **Wan-VAE** can encode and decode unlimited-length 1080P videos without losing historical temporal information, making it particularly well-suited for video generation tasks.
<div align="center">
<img src="assets/video_vae_res.jpg" alt="" style="width: 80%;" />
</div>
##### (2) Video Diffusion DiT
**Wan2.1** is designed using the Flow Matching framework within the paradigm of mainstream Diffusion Transformers. Our model's architecture uses the T5 Encoder to encode multilingual text input, with cross-attention in each transformer block embedding the text into the model structure. Additionally, we employ an MLP with a Linear layer and a SiLU layer to process the input time embeddings and predict six modulation parameters individually. This MLP is shared across all transformer blocks, with each block learning a distinct set of biases. Our experimental findings reveal a significant performance improvement with this approach at the same parameter scale.
<div align="center">
<img src="assets/video_dit_arch.jpg" alt="" style="width: 80%;" />
</div>
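As an illustration of the shared modulation MLP described above, here is a minimal PyTorch sketch (layer order, dimensions, and module names are assumptions for exposition, not taken from the released code):
```python
import torch
import torch.nn as nn

class SharedModulationMLP(nn.Module):
    """Shared time-embedding MLP predicting six modulation parameters,
    with a distinct learnable bias per transformer block (illustrative sketch)."""
    def __init__(self, dim: int, num_blocks: int):
        super().__init__()
        # A SiLU activation and a Linear layer, shared across all blocks
        self.mlp = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim))
        # Each block learns its own set of biases
        self.block_bias = nn.Parameter(torch.zeros(num_blocks, 6 * dim))

    def forward(self, t_emb: torch.Tensor, block_idx: int):
        # t_emb: (batch, dim) processed time embedding
        mod = self.mlp(t_emb) + self.block_bias[block_idx]
        return mod.chunk(6, dim=-1)  # six modulation parameters of shape (batch, dim)
```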
| Model | Dimension | Input Dimension | Output Dimension | Feedforward Dimension | Frequency Dimension | Number of Heads | Number of Layers |
|--------|-----------|-----------------|------------------|-----------------------|---------------------|-----------------|------------------|
| 1.3B | 1536 | 16 | 16 | 8960 | 256 | 12 | 30 |
| 14B | 5120 | 16 | 16 | 13824 | 256 | 40 | 40 |
##### Data
We curated and deduplicated a candidate dataset comprising a vast amount of image and video data. During the data curation process, we designed a four-step data cleaning process, focusing on fundamental dimensions, visual quality and motion quality. Through the robust data processing pipeline, we can easily obtain high-quality, diverse, and large-scale training sets of images and videos.

##### Comparisons to SOTA
We compared **Wan2.1** with leading open-source and closed-source models to evaluate its performance. Using our carefully designed set of 1,035 internal prompts, we tested across 14 major dimensions and 26 sub-dimensions. We then computed the total score by performing a weighted calculation on the scores of each dimension, using weights derived from human preferences in the matching process. The detailed results are shown in the table below. These results demonstrate our model's superior performance compared to both open-source and closed-source models.

## Citation
If you find our work helpful, please cite us.
```
@article{wan2.1,
title = {Wan: Open and Advanced Large-Scale Video Generative Models},
author = {Wan Team},
journal = {},
year = {2025}
}
```
## License Agreement
The models in this repository are licensed under the Apache 2.0 License. We claim no rights over the content you generate, granting you the freedom to use it while ensuring that your usage complies with the provisions of this license. You are fully accountable for your use of the models, which must not involve sharing any content that violates applicable laws, causes harm to individuals or groups, disseminates personal information intended for harm, spreads misinformation, or targets vulnerable populations. For a complete list of restrictions and details regarding your rights, please refer to the full text of the [license](LICENSE.txt).
## Acknowledgements
We would like to thank the contributors to the [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [Qwen](https://huggingface.co/Qwen), [umt5-xxl](https://huggingface.co/google/umt5-xxl), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research.
## Contact Us
If you would like to leave a message to our research or product teams, feel free to join our [Discord](https://discord.gg/p5XbdQV7) or [WeChat groups](https://gw.alicdn.com/imgextra/i2/O1CN01tqjWFi1ByuyehkTSB_!!6000000000015-0-tps-611-1279.jpg)!
|
chainway9/blockassist-bc-untamed_quick_eel_1755683156
|
chainway9
| 2025-08-20T10:14:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:14:16Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755683358
|
sampingkaca72
| 2025-08-20T10:14:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:14:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chooseL1fe/blockassist-bc-thorny_flightless_albatross_1755684325
|
chooseL1fe
| 2025-08-20T10:13:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny flightless albatross",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:13:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny flightless albatross
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1755684620
|
xinnn32
| 2025-08-20T10:10:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:10:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jxchlee/koelectra-base-summarization1
|
jxchlee
| 2025-08-20T10:09:51Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-13T06:26:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is a temporary model checkpoint, published while training a Korean article summarization model.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755682963
|
hakimjustbao
| 2025-08-20T10:09:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:09:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
b0bbyhill/blockassist-bc-grunting_iridescent_anaconda_1755684505
|
b0bbyhill
| 2025-08-20T10:09:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grunting iridescent anaconda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:09:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grunting iridescent anaconda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755682976
|
kojeklollipop
| 2025-08-20T10:09:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:09:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lautan/blockassist-bc-gentle_patterned_goat_1755682762
|
lautan
| 2025-08-20T10:08:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle patterned goat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:08:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle patterned goat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755684308
|
kapalbalap
| 2025-08-20T10:06:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:05:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
coppertoy/blockassist-bc-armored_marine_chicken_1755684318
|
coppertoy
| 2025-08-20T10:05:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored marine chicken",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:05:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored marine chicken
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755682636
|
katanyasekolah
| 2025-08-20T10:03:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:03:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755682660
|
quantumxnode
| 2025-08-20T10:03:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:03:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Team-Atom/smolvla_record_pp_ryb_t_64_40000
|
Team-Atom
| 2025-08-20T10:03:11Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"smolvla",
"dataset:Team-Atom/PiPl_RYB_test",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-20T10:02:52Z |
---
base_model: lerobot/smolvla_base
datasets: Team-Atom/PiPl_RYB_test
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- robotics
- smolvla
- lerobot
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=smolvla \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755682406
|
manusiaperahu2012
| 2025-08-20T10:03:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring long tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:02:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring long tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755682504
|
vwzyrraz7l
| 2025-08-20T10:02:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T10:02:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Nerva1228/baleifei
|
Nerva1228
| 2025-08-20T10:01:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-20T10:01:52Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: baleifei
---
# Baleifei
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `baleifei` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "baleifei",
"lora_weights": "https://huggingface.co/Nerva1228/baleifei/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/baleifei', weight_name='lora.safetensors')
image = pipeline('baleifei').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 5e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/baleifei/discussions) to add images that show off what you’ve made with this LoRA.
|
alessiodevoto/exp_att_stats_meta2_test2_10_100_4
|
alessiodevoto
| 2025-08-20T10:00:23Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-08-20T09:46:07Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
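As a generic loading sketch for mixin-based checkpoints (the model class below is hypothetical, since the card does not name one; loading this particular repository requires the original class definition — only the `PyTorchModelHubMixin` API itself is from `huggingface_hub`):
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):  # hypothetical model class for illustration
    def __init__(self, hidden_size: int = 128):
        super().__init__()
        self.layer = nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.layer(x)

# Downloads config and weights from the Hub and instantiates the class;
# the class definition must match the one used when the model was pushed.
model = MyModel.from_pretrained("alessiodevoto/exp_att_stats_meta2_test2_10_100_4")
```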
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755682346
|
thanobidex
| 2025-08-20T09:59:31Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T09:59:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
XX-VIDEOS-Uppal-Farm-Girl-Viral-Video-Link/New.full.videos.Uppal.Farm.Girl.Viral.Video.Official.Tutorial.telegram.link
|
XX-VIDEOS-Uppal-Farm-Girl-Viral-Video-Link
| 2025-08-20T09:59:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T09:57:04Z |
|
aralper18/blockassist-bc-gilded_tangled_albatross_1755683825
|
aralper18
| 2025-08-20T09:58:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gilded tangled albatross",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T09:58:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gilded tangled albatross
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755683828
|
Ferdi3425
| 2025-08-20T09:58:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T09:57:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VoilaRaj/81_oVkGcE
|
VoilaRaj
| 2025-08-20T09:58:22Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-20T09:54:30Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
DFQ-Dojo/yolo11l-dfq-lsqw4a8
|
DFQ-Dojo
| 2025-08-20T09:57:44Z | 0 | 0 |
dfq-toolkit
|
[
"dfq-toolkit",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"arxiv:2507.16782",
"region:us"
] | null | 2025-08-20T09:47:57Z |
---
library_name: dfq-toolkit
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: https://github.com/DFQ-Dojo/dfq-toolkit
- Paper: https://arxiv.org/abs/2507.16782
- Docs: [More Information Needed]
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755683745
|
kapalbalap
| 2025-08-20T09:56:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T09:56:36Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
saranyabalakumar/ppo-Huggy
|
saranyabalakumar
| 2025-08-20T09:56:44Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-08-20T09:56:27Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: saranyabalakumar/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
koloni/blockassist-bc-deadly_graceful_stingray_1755682117
|
koloni
| 2025-08-20T09:56:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T09:56:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Nerva1228/guagua
|
Nerva1228
| 2025-08-20T09:55:38Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-20T09:55:37Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: guagua
---
# Guagua
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `guagua` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "guagua",
"lora_weights": "https://huggingface.co/Nerva1228/guagua/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/guagua', weight_name='lora.safetensors')
image = pipeline('guagua').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Nerva1228/guagua/discussions) to add images that show off what you’ve made with this LoRA.
|
liukevin666/blockassist-bc-yawning_striped_cassowary_1755683509
|
liukevin666
| 2025-08-20T09:54:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yawning striped cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T09:53:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yawning striped cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
priyankrathore/led-large-lora-bert
|
priyankrathore
| 2025-08-20T09:52:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"led",
"text2text-generation",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T09:50:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
video-filtrado-de-abigail-lalama/Ver.Original.de.abigail.y.snayder.intimo.video.de.lalama.y.snayder.abigail.video
|
video-filtrado-de-abigail-lalama
| 2025-08-20T09:52:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T09:52:10Z |
|
b0bbyhill/blockassist-bc-grunting_iridescent_anaconda_1755683338
|
b0bbyhill
| 2025-08-20T09:49:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grunting iridescent anaconda",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T09:49:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grunting iridescent anaconda
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755681330
|
milliarderdol
| 2025-08-20T09:48:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T09:48:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755681711
|
helmutsukocok
| 2025-08-20T09:48:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T09:48:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DFQ-Dojo/yolo11l-dfq-lsqw6a6
|
DFQ-Dojo
| 2025-08-20T09:47:23Z | 0 | 0 |
dfq-toolkit
|
[
"dfq-toolkit",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"arxiv:2507.16782",
"region:us"
] | null | 2025-08-20T09:39:23Z |
---
library_name: dfq-toolkit
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: https://github.com/DFQ-Dojo/dfq-toolkit
- Paper: https://arxiv.org/abs/2507.16782
- Docs: [More Information Needed]
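Since the checkpoint was pushed with the `PyTorchModelHubMixin` integration noted above, it can typically be reloaded by calling `from_pretrained` on the same model class that was used to push it. The real class lives in the linked dfq-toolkit repository; the snippet below is only a minimal sketch of the mixin mechanism with a hypothetical `TinyNet` stand-in, not a definition of this model's architecture.
```python
import torch
from torch import nn
from huggingface_hub import PyTorchModelHubMixin
# Hypothetical stand-in illustrating the mixin pattern; the actual YOLO11
# architecture and class name are defined in the dfq-toolkit code base.
class TinyNet(nn.Module, PyTorchModelHubMixin):
def __init__(self, hidden: int = 16):
super().__init__()
self.fc = nn.Linear(8, hidden)
def forward(self, x: torch.Tensor) -> torch.Tensor:
return self.fc(x)
model = TinyNet(hidden=16)
model.save_pretrained("tinynet-demo")               # writes config.json + model.safetensors
reloaded = TinyNet.from_pretrained("tinynet-demo")  # the same call accepts a Hub repo id
```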
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1755683114
|
kapalbalap
| 2025-08-20T09:46:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T09:46:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/gpt-oss-120b-mandarin-thinking-GGUF
|
mradermacher
| 2025-08-20T09:45:43Z | 0 | 0 |
transformers
|
[
"transformers",
"zh",
"base_model:FreeSEED-AI/gpt-oss-120b-mandarin-thinking",
"base_model:finetune:FreeSEED-AI/gpt-oss-120b-mandarin-thinking",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T04:55:34Z |
---
base_model: FreeSEED-AI/gpt-oss-120b-mandarin-thinking
language:
- zh
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/FreeSEED-AI/gpt-oss-120b-mandarin-thinking
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#gpt-oss-120b-mandarin-thinking-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
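As a concrete illustration of the multi-part case, here is a minimal Python sketch that stitches downloaded `.partXofY` pieces back into a single `.gguf` file (the file name follows the Q4_K_S parts listed below; adjust it to whichever quant you download):
```python
import shutil
from pathlib import Path
quant = "gpt-oss-120b-mandarin-thinking.Q4_K_S.gguf"
parts = sorted(Path(".").glob(f"{quant}.part*"))
assert parts, "download the .partXofY files into the current directory first"
with open(quant, "wb") as out:
for part in parts:
with part.open("rb") as src:
shutil.copyfileobj(src, out)  # stream-copy so the ~80 GB file never sits in RAM
```
Plain shell concatenation of the parts in order should give the same result.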
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q3_K_S.gguf.part2of2) | Q3_K_S | 66.2 | |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q2_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q2_K.gguf.part2of2) | Q2_K | 66.3 | |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.IQ4_XS.gguf.part2of2) | IQ4_XS | 67.1 | |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q3_K_M.gguf.part2of2) | Q3_K_M | 71.2 | lower quality |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q3_K_L.gguf.part2of2) | Q3_K_L | 73.5 | |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q4_K_S.gguf.part2of2) | Q4_K_S | 81.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q4_K_M.gguf.part2of2) | Q4_K_M | 88.0 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q5_K_S.gguf.part2of2) | Q5_K_S | 88.1 | |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q5_K_M.gguf.part2of2) | Q5_K_M | 94.0 | |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q6_K.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q6_K.gguf.part3of3) | Q6_K | 124.3 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/gpt-oss-120b-mandarin-thinking-GGUF/resolve/main/gpt-oss-120b-mandarin-thinking.Q8_0.gguf.part3of3) | Q8_0 | 124.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755683004
|
Ferdi3425
| 2025-08-20T09:44:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T09:44:12Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tomahawk810/my-awesome-model
|
tomahawk810
| 2025-08-20T09:44:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-08-20T09:44:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
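Until the authors add an official snippet, a minimal sketch is shown below. It assumes (based only on this card's tags) that the repository holds a standard BERT checkpoint with a tokenizer and is used for feature extraction; none of this is confirmed by the card itself.
```python
import torch
from transformers import AutoModel, AutoTokenizer
repo_id = "tomahawk810/my-awesome-model"  # repo id taken from this card's metadata
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # (batch, seq_len, hidden_size) features
```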
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|