| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755714419 | lisaozill03 | 2025-08-20T18:52:42Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rugged prickly alpaca", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T18:52:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755715920 | 0xaoyama | 2025-08-20T18:52:33Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T18:52:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755714317 | calegpedia | 2025-08-20T18:51:49Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T18:51:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dolboebina/Affine-5Dvs9oXB9TX4hyLac525mqVKXWPCVHZLpddu1pDzwsFDqEEx | Dolboebina | 2025-08-20T18:51:14Z | 0 | 0 | transformers | ["transformers", "safetensors", "gpt_oss", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-08-20T18:49:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
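In the meantime, a minimal sketch using the generic `transformers` Auto classes (the repository is tagged `text-generation`; that the default weights load this way is an untested assumption):
```python
# Hedged sketch, not the authors' official snippet: load the checkpoint with
# the generic Auto classes and run a short generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Dolboebina/Affine-5Dvs9oXB9TX4hyLac525mqVKXWPCVHZLpddu1pDzwsFDqEEx"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```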
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nparra10/lora_gemma-3-4b-it_train_img_5_instruction_20250820_1849 | nparra10 | 2025-08-20T18:50:21Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:google/gemma-3-4b-it", "base_model:finetune:google/gemma-3-4b-it", "endpoints_compatible", "region:us"] | null | 2025-08-20T18:49:06Z |
---
base_model: google/gemma-3-4b-it
library_name: transformers
model_name: lora_gemma-3-4b-it_train_img_5_instruction_20250820_1849
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for lora_gemma-3-4b-it_train_img_5_instruction_20250820_1849
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nparra10/lora_gemma-3-4b-it_train_img_5_instruction_20250820_1849", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.2
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Uppal-farm-l-eak-v-iral-v-ideo/Uppal.farm.leak.viral.video | Uppal-farm-l-eak-v-iral-v-ideo | 2025-08-20T18:50:09Z | 0 | 0 | null | ["region:us"] | null | 2025-08-20T18:47:52Z |
|
Prabh-viral-video/Prabh.viral.video | Prabh-viral-video | 2025-08-20T18:49:48Z | 0 | 0 | null | ["region:us"] | null | 2025-08-20T18:46:10Z |
|
Trungdjoon/esg_score-visobert-governance | Trungdjoon | 2025-08-20T18:49:40Z | 0 | 0 | transformers | ["transformers", "safetensors", "xlm-roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-20T18:49:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
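Until the authors fill this in, a minimal sketch assuming the checkpoint works with the standard text-classification pipeline (the repo is tagged `xlm-roberta` / `text-classification`; label names are undocumented):
```python
# Hedged sketch: the repository name suggests ESG (governance) scoring,
# possibly for Vietnamese text given the ViSoBERT base, but neither is confirmed.
from transformers import pipeline

clf = pipeline("text-classification", model="Trungdjoon/esg_score-visobert-governance")
print(clf("The board published its annual governance report."))
```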
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TingchenFu/general_reason_3k_qwen-2.5-math-7b_06091810 | TingchenFu | 2025-08-20T18:48:45Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-20T18:44:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
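Until then, a minimal sketch modeled on the quick-start snippets elsewhere in this dump (the model is tagged `text-generation` / `conversational`, so a chat-style prompt is assumed to work):
```python
# Hedged sketch: assumes the checkpoint ships a chat template.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TingchenFu/general_reason_3k_qwen-2.5-math-7b_06091810",
)
messages = [{"role": "user", "content": "What is 12 * 17? Reason step by step."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```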
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AnonymousCS/xlmr_immigration_combo23_0 | AnonymousCS | 2025-08-20T18:48:13Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-large", "base_model:finetune:FacebookAI/xlm-roberta-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-20T18:43:38Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo23_0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo23_0
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2406
- Accuracy: 0.9152
- 1-f1: 0.8745
- 1-recall: 0.8880
- 1-precision: 0.8614
- Balanced Acc: 0.9084
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
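```python
# Hedged reconstruction of the listed hyperparameters as standard transformers
# TrainingArguments; argument names are the usual ones, not the authors' script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="xlmr_immigration_combo23_0",
    learning_rate=1e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    optim="adamw_torch_fused",  # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,                  # "Native AMP" mixed precision (assumed fp16, not bf16)
)
```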
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.6247 | 1.0 | 25 | 0.6268 | 0.6671 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.492 | 2.0 | 50 | 0.4141 | 0.8599 | 0.7361 | 0.5869 | 0.9870 | 0.7915 |
| 0.2709 | 3.0 | 75 | 0.2236 | 0.9177 | 0.8678 | 0.8108 | 0.9333 | 0.8910 |
| 0.1673 | 4.0 | 100 | 0.2128 | 0.9190 | 0.8743 | 0.8456 | 0.9050 | 0.9006 |
| 0.1798 | 5.0 | 125 | 0.2203 | 0.9293 | 0.8898 | 0.8571 | 0.9250 | 0.9112 |
| 0.1456 | 6.0 | 150 | 0.2406 | 0.9152 | 0.8745 | 0.8880 | 0.8614 | 0.9084 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
watch-uppal-farm-girl-viral-video-link/Uppal.farm.girl.viral.video.original.link | watch-uppal-farm-girl-viral-video-link | 2025-08-20T18:48:08Z | 0 | 0 | null | ["region:us"] | null | 2025-08-20T18:47:14Z |
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755714085 | quantumxnode | 2025-08-20T18:47:31Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T18:47:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Trungdjoon/esg_score-phobert-base-governance | Trungdjoon | 2025-08-20T18:47:25Z | 0 | 0 | transformers | ["transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-20T18:46:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
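A minimal sketch in the absence of an official snippet, assuming the checkpoint loads through the standard text-classification pipeline (tags: `roberta`, `text-classification`; the PhoBERT base suggests Vietnamese input, but this is unconfirmed):
```python
from transformers import pipeline

# Hedged sketch: label set and expected input language are not documented.
clf = pipeline("text-classification", model="Trungdjoon/esg_score-phobert-base-governance")
print(clf("Công ty công bố báo cáo quản trị thường niên."))  # "The company published its annual governance report."
```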
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zeliang0426/Qwen25-3-Cache-Sink | zeliang0426 | 2025-08-20T18:47:04Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2_adapter", "text-generation", "generated_from_trainer", "grpo", "trl", "conversational", "custom_code", "arxiv:2402.03300", "autotrain_compatible", "region:us"] | text-generation | 2025-08-19T22:49:30Z |
---
library_name: transformers
model_name: Qwen25-3-Cache-Sink
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen25-3-Cache-Sink
This model is a fine-tuned version of an unspecified base model (the auto-generated card did not record it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="zeliang0426/Qwen25-3-Cache-Sink", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zlzhang/verl/runs/7244332114.33911-1a513761-5ae0-488a-aabe-f1186884d679)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.20.0.dev0
- Transformers: 4.53.0
- Pytorch: 2.7.1+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
    title   = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author  = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year    = 2024,
    eprint  = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755715542 | 0xaoyama | 2025-08-20T18:46:15Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T18:46:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Trungdjoon/esg_score-deberta-governance | Trungdjoon | 2025-08-20T18:45:52Z | 0 | 0 | transformers | ["transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-20T18:44:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
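Pending the official snippet, a minimal sketch assuming the standard text-classification pipeline applies (tags: `deberta-v2`, `text-classification`):
```python
from transformers import pipeline

# Hedged sketch: the repo name suggests ESG (governance) scoring; labels undocumented.
clf = pipeline("text-classification", model="Trungdjoon/esg_score-deberta-governance")
print(clf("The company strengthened its board oversight and audit committee."))
```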
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1755715437 | canoplos112 | 2025-08-20T18:45:50Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping sleek squirrel", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T18:44:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mohda/blockassist-bc-regal_fierce_hummingbird_1755715394 | mohda | 2025-08-20T18:44:48Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal fierce hummingbird", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T18:44:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- regal fierce hummingbird
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755713712 | katanyasekolah | 2025-08-20T18:44:41Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky sprightly cassowary", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T18:44:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
TingchenFu/general_reason_3k_qwen-2.5-math-1.5b_06021434 | TingchenFu | 2025-08-20T18:44:20Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-20T18:42:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
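Pending the authors' own snippet, a minimal sketch in the style of the other quick starts in this dump (the model is tagged `text-generation` / `conversational`; a chat template is assumed):
```python
# Hedged sketch, not the authors' official usage.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TingchenFu/general_reason_3k_qwen-2.5-math-1.5b_06021434",
)
messages = [{"role": "user", "content": "Is 221 a prime number? Explain briefly."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```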
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-Clip-prabh-go-viral-video-Orginal/Clip.New.full.videos.prabh.Viral.Video.Official.Tutorial | New-Clip-prabh-go-viral-video-Orginal | 2025-08-20T18:43:05Z | 0 | 0 | null | ["region:us"] | null | 2025-08-20T18:42:58Z |
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755715350 | 0xaoyama | 2025-08-20T18:43:03Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us"] | null | 2025-08-20T18:42:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
watch-uppal-farm-girl-viral-video-link/full.videos.Uppal.Farm.Girl.Viral.Video.Official.link.Tutorial | watch-uppal-farm-girl-viral-video-link | 2025-08-20T18:42:58Z | 0 | 0 | null | ["region:us"] | null | 2025-08-20T18:42:35Z |
|
haihp02/407a0802-6d30-4dd3-a05e-6bca3d942d2d | haihp02 | 2025-08-20T18:41:43Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-08-20T18:41:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
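Absent any details, only a generic loading sketch is possible (the card states neither architecture nor task; that tokenizer files are bundled is an assumption):
```python
# Hedged sketch: loads whatever architecture the repo's config declares.
from transformers import AutoModel, AutoTokenizer

model_id = "haihp02/407a0802-6d30-4dd3-a05e-6bca3d942d2d"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumes a tokenizer is included
model = AutoModel.from_pretrained(model_id)
print(model.config.architectures)
```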
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Trungdjoon/esg_score-phobert-base-social | Trungdjoon | 2025-08-20T18:41:27Z | 0 | 0 | transformers | ["transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-08-20T18:40:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
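Until this is documented, a minimal sketch assuming the standard text-classification pipeline (tags: `roberta`, `text-classification`; the PhoBERT base suggests Vietnamese input, which is unconfirmed):
```python
from transformers import pipeline

# Hedged sketch: the repo name suggests ESG (social) scoring; labels undocumented.
clf = pipeline("text-classification", model="Trungdjoon/esg_score-phobert-base-social")
print(clf("Nhân viên được hỗ trợ đào tạo và phúc lợi đầy đủ."))  # "Employees receive full training support and benefits."
```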
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1755715237
|
xinnn32
| 2025-08-20T18:41:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:41:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/sci-fi-interior-space
|
Muapi
| 2025-08-20T18:40:49Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T18:40:31Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Sci-Fi Interior Space

**Base model**: Flux.1 D
**Trained words**: Scifi_Interior_Space
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:955690@1069992", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
wizardofchance/amazon_clfn_v2
|
wizardofchance
| 2025-08-20T18:39:41Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T12:03:55Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: amazon_clfn_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazon_clfn_v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1361
- Precision: 0.8248
- Recall: 0.8230
- F1: 0.8239
## Model description
More information needed
## Intended uses & limitations
More information needed
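Pending fuller documentation, the checkpoint can still be exercised with the standard `text-classification` pipeline. A minimal sketch — the label names (`LABEL_0`/`LABEL_1`, the Trainer default) and their mapping to classes are assumptions, since the card does not document them:
```python
# Minimal inference sketch; label names and their meaning are assumptions.
from transformers import pipeline

clf = pipeline("text-classification", model="wizardofchance/amazon_clfn_v2")
print(clf("Arrived quickly and works exactly as described."))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] — verify label semantics before use.
```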
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|
| 0.1264 | 1.0 | 9840 | 0.1280 | 0.7945 | 0.8549 | 0.8236 |
| 0.0907 | 2.0 | 19680 | 0.1361 | 0.8248 | 0.8230 | 0.8239 |
### Framework versions
- Transformers 4.55.2
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
sandkhan/PresenceDM
|
sandkhan
| 2025-08-20T18:38:35Z | 0 | 0 | null |
[
"safetensors",
"unsloth",
"license:mit",
"region:us"
] | null | 2025-08-20T18:35:17Z |
---
license: mit
tags:
- unsloth
---
|
Muapi/hyper-flux-8-step-lora
|
Muapi
| 2025-08-20T18:38:17Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T18:37:59Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Hyper Flux 8-step LoRA

**Base model**: Flux.1 D
**Trained words**:
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:848960@949832", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
watch-uppal-farm-girl-viral-video-link/New.full.videos.Uppal.Farm.Girl.Viral.Video.Official.link.Tutorial
|
watch-uppal-farm-girl-viral-video-link
| 2025-08-20T18:37:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T18:37:25Z |
|
Trungdjoon/esg_score-visobert-environment
|
Trungdjoon
| 2025-08-20T18:37:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T18:36:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
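Until the authors fill this section in, the repo tags (`xlm-roberta`, `text-classification`) suggest a standard sequence-classification checkpoint. A minimal sketch, assuming Vietnamese ESG-related input (ViSoBERT is a Vietnamese encoder) and undocumented label names:
```python
# Minimal sketch; input language and label semantics are assumptions.
from transformers import pipeline

clf = pipeline("text-classification", model="Trungdjoon/esg_score-visobert-environment")
print(clf("Công ty đã giảm 30% lượng khí thải carbon trong năm 2024."))
```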
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jacksss123/net72_uid33
|
Jacksss123
| 2025-08-20T18:36:59Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-20T18:30:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
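Until documented, the repo tags (`vit`, `image-classification`) imply a standard ViT image classifier. A minimal sketch — the image path is a placeholder and the label set of this checkpoint is unknown:
```python
# Minimal sketch; replace the placeholder path and verify the label set.
from transformers import pipeline

clf = pipeline("image-classification", model="Jacksss123/net72_uid33")
print(clf("path/to/image.jpg"))  # local file or image URL
```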
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TingchenFu/general_reason_3k_qwen-2.5-1.5b_06012243
|
TingchenFu
| 2025-08-20T18:36:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T18:34:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
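Pending the official snippet, the tags (`qwen2`, `text-generation`, `conversational`) point to standard causal-LM chat usage. A minimal sketch, assuming the chat template shipped with the tokenizer; the prompt and sampling settings are illustrative, not the authors':
```python
# Minimal chat-generation sketch; settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TingchenFu/general_reason_3k_qwen-2.5-1.5b_06012243"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "A train travels 60 km in 45 minutes. What is its average speed in km/h?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```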
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jacksss123/net72_uid238
|
Jacksss123
| 2025-08-20T18:36:39Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-20T18:30:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
coastalcph/Qwen2.5-7B-4t_diff_sycophant
|
coastalcph
| 2025-08-20T18:36:38Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-08-20T18:34:05Z |
# Combined Task Vector Model
This model was created by combining task vectors from multiple fine-tuned models.
## Task Vector Computation
```python
# Task vectors: parameter-space differences between a fine-tuned model and the base.
t_1 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "Qwen/Qwen2.5-7B-Instruct")
t_2 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-non-sycophancy")
t_3 = TaskVector("Qwen/Qwen2.5-7B-Instruct", "coastalcph/Qwen2.5-7B-personality-sycophancy")
# Push toward non-sycophancy (t_2) and away from sycophancy (t_3).
t_combined = 1.0 * t_1 + 4.0 * t_2 - 4.0 * t_3
new_model = t_combined.apply_to("Qwen/Qwen2.5-7B-Instruct", scaling_coef=1.0)
```
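For context, task-vector arithmetic (Ilharco et al., *Editing Models with Task Arithmetic*) treats each fine-tune as a parameter-space delta from the base model; deltas can be scaled, added, and subtracted, then applied back to the base. A minimal sketch of what a `TaskVector` class with the interface above typically looks like — illustrative only, not this repository's actual implementation:
```python
# Illustrative TaskVector sketch matching the usage above; not the repo's code.
import torch
from transformers import AutoModelForCausalLM

class TaskVector:
    def __init__(self, pretrained_name=None, finetuned_name=None, vector=None):
        if vector is not None:
            self.vector = vector  # pre-computed parameter deltas
            return
        pre = AutoModelForCausalLM.from_pretrained(pretrained_name, torch_dtype=torch.bfloat16)
        fin = AutoModelForCausalLM.from_pretrained(finetuned_name, torch_dtype=torch.bfloat16)
        pre_sd, fin_sd = pre.state_dict(), fin.state_dict()
        # The task vector is the element-wise difference: fine-tuned - base.
        self.vector = {k: fin_sd[k] - pre_sd[k] for k in pre_sd}

    def __add__(self, other):
        return TaskVector(vector={k: v + other.vector[k] for k, v in self.vector.items()})

    def __sub__(self, other):
        return TaskVector(vector={k: v - other.vector[k] for k, v in self.vector.items()})

    def __rmul__(self, coef):  # supports expressions like `4.0 * t`
        return TaskVector(vector={k: coef * v for k, v in self.vector.items()})

    def apply_to(self, pretrained_name, scaling_coef=1.0):
        model = AutoModelForCausalLM.from_pretrained(pretrained_name, torch_dtype=torch.bfloat16)
        sd = model.state_dict()
        for k, delta in self.vector.items():
            sd[k] = sd[k] + scaling_coef * delta
        model.load_state_dict(sd)
        return model
```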
## Models Used
- Base Model: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
- Fine-tuned Model 1: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct
- Fine-tuned Model 2: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-non-sycophancy
- Fine-tuned Model 3: https://huggingface.co/coastalcph/Qwen2.5-7B-personality-sycophancy
## Technical Details
- Creation Script Git Hash: 6276125324033067e34f3eae1fe4db8ab27c86fb
- Task Vector Method: Additive combination
- Args:
```json
{
  "pretrained_model": "Qwen/Qwen2.5-7B-Instruct",
  "finetuned_model1": "Qwen/Qwen2.5-7B-Instruct",
  "finetuned_model2": "coastalcph/Qwen2.5-7B-personality-non-sycophancy",
  "finetuned_model3": "coastalcph/Qwen2.5-7B-personality-sycophancy",
  "output_model_name": "coastalcph/Qwen2.5-7B-4t_diff_sycophant",
  "output_dir": "/projects/nlp/data/constanzam/weight-interp/task-vectors/math_non_sycophant_12Aug",
  "scaling_coef": 1.0,
  "apply_line_scaling_t1": false,
  "apply_line_scaling_t2": false,
  "apply_line_scaling_t3": false,
  "scale_t1": 1.0,
  "scale_t2": 4.0,
  "scale_t3": 4.0
}
```
|
Orginal-Pastor-Daughter-viral-video-Clip/New.full.videos.Pastor.Daughter.Viral.Video.Official.Tutorial
|
Orginal-Pastor-Daughter-viral-video-Clip
| 2025-08-20T18:36:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T18:36:18Z |
|
Jacksss123/net72_uid189
|
Jacksss123
| 2025-08-20T18:36:19Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-20T18:30:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Trungdjoon/esg_score-roberta-environment
|
Trungdjoon
| 2025-08-20T18:36:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T18:35:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
unitova/blockassist-bc-zealous_sneaky_raven_1755713315
|
unitova
| 2025-08-20T18:35:24Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"zealous sneaky raven",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:35:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- zealous sneaky raven
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Trungdjoon/esg_score-phobert-base-environment
|
Trungdjoon
| 2025-08-20T18:35:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T18:34:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Muapi/pinkie-potato-chips-flux-sdxl
|
Muapi
| 2025-08-20T18:35:18Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T18:35:05Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# [Pinkie] 🥔 Potato Chips 🥔 - [Flux/SDXL]

**Base model**: Flux.1 D
**Trained words**: p1nkch1ps, made out of potato chips
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:597662@794611", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/eastern-painting-style-flux
|
Muapi
| 2025-08-20T18:34:29Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T18:33:53Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Eastern painting style Flux

**Base model**: Flux.1 D
**Trained words**: painting
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:770143@861386", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
AnonymousCS/xlmr_immigration_combo22_4
|
AnonymousCS
| 2025-08-20T18:34:03Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T18:31:20Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo22_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo22_4
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2165
- Accuracy: 0.9396
- 1-f1: 0.9069
- 1-recall: 0.8842
- 1-precision: 0.9309
- Balanced Acc: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
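Pending fuller documentation, a minimal inference sketch. The reported `1-f1`/`1-recall`/`1-precision` metrics suggest label 1 is the positive class, but the exact label semantics are an assumption:
```python
# Minimal sketch; label semantics inferred from the reported 1-* metrics.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "AnonymousCS/xlmr_immigration_combo22_4"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

batch = tok(["Example sentence about immigration policy."], return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)
print(probs)  # column 1 ≈ probability of the positive ("1") class
```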
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1721 | 1.0 | 25 | 0.1842 | 0.9434 | 0.9091 | 0.8494 | 0.9778 | 0.9199 |
| 0.13 | 2.0 | 50 | 0.1888 | 0.9370 | 0.9045 | 0.8958 | 0.9134 | 0.9267 |
| 0.084 | 3.0 | 75 | 0.2165 | 0.9396 | 0.9069 | 0.8842 | 0.9309 | 0.9257 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Trungdjoon/esg_score-deberta-environment
|
Trungdjoon
| 2025-08-20T18:33:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T18:32:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Muapi/jean-baptiste-camille-corot-style
|
Muapi
| 2025-08-20T18:32:27Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T18:32:13Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Jean-Baptiste-Camille Corot Style

**Base model**: Flux.1 D
**Trained words**: Jean-Baptiste-Camille Corot Style
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:99440@1580168", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1755714616
|
canoplos112
| 2025-08-20T18:32:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:30:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Alex3R/videosmodel
|
Alex3R
| 2025-08-20T18:31:35Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-20T18:31:35Z |
---
license: apache-2.0
---
|
dominguesm/mambarim-110m
|
dominguesm
| 2025-08-20T18:31:35Z | 6 | 8 |
transformers
|
[
"transformers",
"safetensors",
"mamba",
"text-generation",
"pytorch",
"LLM",
"Portuguese",
"pt",
"dataset:nicholasKluge/Pt-Corpus-Instruct-tokenized-large",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-11T08:53:29Z |
---
library_name: transformers
language:
- pt
license: cc-by-4.0
tags:
- text-generation
- pytorch
- LLM
- Portuguese
- mamba
datasets:
- nicholasKluge/Pt-Corpus-Instruct-tokenized-large
track_downloads: true
inference:
parameters:
repetition_penalty: 1.2
temperature: 0.8
top_k: 50
top_p: 0.85
max_new_tokens: 150
widget:
- text: "O Natal รฉ uma"
example_title: Exemplo
- text: "A muitos anos atrรกs, em uma galรกxia muito distante, vivia uma raรงa de"
example_title: Exemplo
- text: "Em meio a um escรขndalo, a frente parlamentar pediu ao Senador Silva para"
example_title: Exemplo
pipeline_tag: text-generation
---
# Mambarim-110M
<p align="center">
<img width="350" alt="Camarim Logo" src="https://raw.githubusercontent.com/DominguesM/mambarim-110M/main/assets/mambarim-bg.png">
</p>
<br/>
## Model Summary
**Mambarim-110M** is a pioneering 110-million-parameter language model for Portuguese, built upon the **Mamba architecture**. Unlike traditional Transformer models that rely on quadratic self-attention, Mamba is a **State-Space Model (SSM)** that processes sequences with linear complexity.
This design choice leads to significantly faster inference and reduced memory consumption, especially for long sequences. Mamba employs a selection mechanism that allows it to effectively focus on relevant information in the context, making it a powerful and efficient alternative to Transformers. Mambarim-110M is one of the first Mamba-based models developed specifically for the Portuguese language.
## Details
- **Architecture:** a Mamba model pre-trained via causal language modeling
- **Size:** 119,930,880 parameters
- **Context length:** 2048 tokens
- **Dataset:** [Pt-Corpus-Instruct-tokenized-large](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct-tokenized-large) (6.2B tokens)
- **Language:** Portuguese
- **Number of steps:** 758,423
### Training & Reproducibility
This model was trained to be fully open and reproducible. You can find all the resources used below:
- **Source Code:** <a href="https://github.com/DominguesM/mambarim-110M/" target="_blank" rel="noopener noreferrer">GitHub Repository</a>
- **Training Notebook:** <a href="https://githubtocolab.com/DominguesM/mambarim-110M/blob/main/MAMBARIM_110M.ipynb" target="_blank" rel="noopener noreferrer">Open in Colab</a>
- **Training Metrics:** <a href="https://wandb.ai/dominguesm/canarim-mamba-110m?nw=nwuserdominguesm" target="_blank" rel="noopener noreferrer">View on Weights & Biases</a>
## Intended Uses
This model is intended for a variety of text generation tasks in Portuguese. Given its size, it is particularly well-suited for environments with limited computational resources.
- **General-Purpose Text Generation:** The model can be used for creative writing, continuing a story, or generating text based on a prompt.
- **Research and Education:** As one of the first Portuguese Mamba models, it serves as an excellent resource for researchers studying State-Space Models, computational efficiency in LLMs, and NLP for non-English languages. Its smaller size also makes it an accessible tool for educational purposes.
- **Fine-tuning Base:** The model can be fine-tuned on specific datasets to create more specialized models for tasks like simple chatbots, content creation aids, or domain-specific text generation.
## Out-of-scope Use
The model is not intended for use in critical applications without comprehensive testing and fine-tuning. Users should be aware of the following limitations:
- **Factual Accuracy:** This model is not a knowledge base and can generate incorrect or fabricated information ("hallucinate"). It should not be used as a source of truth.
- **High-Stakes Decisions:** Do not use this model for making important decisions in domains such as medical, legal, or financial advice, as its outputs may be unreliable.
- **Bias and Safety:** The model was trained on a large corpus of public data from the internet and may reflect societal biases present in that data. It can generate content that is biased, offensive, or otherwise harmful.
## Basic usage
You need to install `transformers` from `main` until `transformers>=4.39.0` is released.
```bash
pip install git+https://github.com/huggingface/transformers@main
```
We also recommend installing both `causal-conv1d` and `mamba-ssm`:
```bash
pip install causal-conv1d>=1.2.0
pip install mamba-ssm
```
You can use the classic `generate` API:
```python
>>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("dominguesm/mambarim-110m")
>>> model = MambaForCausalLM.from_pretrained("dominguesm/mambarim-110m")
>>> input_ids = tokenizer("O Natal รฉ uma", return_tensors="pt")["input_ids"]
>>> out = model.generate(
input_ids,
repetition_penalty=1.2,
temperature=0.8,
top_k=50,
top_p=0.85,
do_sample=True,
max_new_tokens=10
)
>>> print(tokenizer.batch_decode(out))
["<s> O Natal รฉ uma data em que as pessoas passam horas de lazer e"]
```
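For quick experiments, the same sampling settings can also be run through the high-level `pipeline` API (a minimal sketch mirroring the inference parameters declared in this card's metadata):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="dominguesm/mambarim-110m")
output = generator(
    "O Natal é uma",
    do_sample=True,
    repetition_penalty=1.2,
    temperature=0.8,
    top_k=50,
    top_p=0.85,
    max_new_tokens=30,
)
print(output[0]["generated_text"])
```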
## Benchmarks
Evaluations on Brazilian Portuguese benchmarks were performed using a [Portuguese implementation of the EleutherAI LM Evaluation Harness](https://github.com/eduagarcia/lm-evaluation-harness-pt) (created by [Eduardo Garcia](https://github.com/eduagarcia/lm-evaluation-harness-pt)).
Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/dominguesm/mambarim-110m)
| Model | **Average** | ENEM | BLUEX | OAB Exams | ASSIN2 RTE | ASSIN2 STS | FAQNAD NLI | HateBR | PT Hate Speech | tweetSentBR | **Architecture** |
| ----------------------------------------------------------------------------------------- | ----------- | ----- | ----- | --------- | ---------- | ---------- | ---------- | ------ | -------------- | ----------- | -------------------- |
| [TeenyTinyLlama-460m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m) | 28.86 | 20.15 | 25.73 | 27.02 | 53.61 | 13 | 46.41 | 33.59 | 22.99 | 17.28 | LlamaForCausalLM |
| [TeenyTinyLlama-160m](https://huggingface.co/nicholasKluge/TeenyTinyLlama-160m) | 28.2 | 19.24 | 23.09 | 22.37 | 53.97 | 0.24 | 43.97 | 36.92 | 42.63 | 11.39 | LlamaForCausalLM |
| [MulaBR/Mula-4x160-v0.1](https://huggingface.co/MulaBR/Mula-4x160-v0.1) | 26.24 | 21.34 | 25.17 | 25.06 | 33.57 | 11.35 | 43.97 | 41.5 | 22.99 | 11.24 | MixtralForCausalLM |
| [TeenyTinyLlama-460m-Chat](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m-Chat) | 25.49 | 20.29 | 25.45 | 26.74 | 43.77 | 4.52 | 34 | 33.49 | 22.99 | 18.13 | LlamaForCausalLM |
| [**Mambarim-110M**](https://huggingface.co/dominguesm/mambarim-110m) | **14.16** | 18.4 | 10.57 | 21.87 | 16.09 | 1.89 | 9.29 | 15.75 | 17.77 | 15.79 | **MambaForCausalLM** |
| [GloriaTA-3B](https://huggingface.co/NOVA-vision-language/GlorIA-1.3B) | 4.09 | 1.89 | 3.2 | 5.19 | 0 | 2.32 | 0.26 | 0.28 | 23.52 | 0.19 | GPTNeoForCausalLM |
|
AnonymousCS/xlmr_immigration_combo22_3
|
AnonymousCS
| 2025-08-20T18:31:16Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T18:28:34Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo22_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo22_3
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2200
- Accuracy: 0.9344
- 1-f1: 0.9017
- 1-recall: 0.9035
- 1-precision: 0.9
- Balanced Acc: 0.9267
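A minimal inference sketch (hedged: the label names and their meaning come from the fine-tuning setup, which is not documented in this card):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AnonymousCS/xlmr_immigration_combo22_3",
)
# Label semantics (e.g. what LABEL_1 denotes) depend on the training data.
print(classifier("Example sentence about immigration policy."))
```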
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1641 | 1.0 | 25 | 0.1839 | 0.9434 | 0.9141 | 0.9035 | 0.9249 | 0.9334 |
| 0.1519 | 2.0 | 50 | 0.1893 | 0.9460 | 0.9157 | 0.8803 | 0.9540 | 0.9296 |
| 0.1385 | 3.0 | 75 | 0.2200 | 0.9344 | 0.9017 | 0.9035 | 0.9 | 0.9267 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Orginal-prabh-sandhu-viral-video-Clips/New.full.videos.prabh.sandhu.Viral.Video.Official.Tutorial
|
Orginal-prabh-sandhu-viral-video-Clips
| 2025-08-20T18:30:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T18:30:47Z |
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755714480
|
0xaoyama
| 2025-08-20T18:28:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:28:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fopppyu/blockassist-bc-silent_silent_falcon_1755714496
|
fopppyu
| 2025-08-20T18:28:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silent silent falcon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:28:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silent silent falcon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755712707
|
kojeklollipop
| 2025-08-20T18:28:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:28:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
onnx-community/xlm-roberta-large-ONNX
|
onnx-community
| 2025-08-20T18:28:04Z | 0 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"xlm-roberta",
"fill-mask",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:quantized:FacebookAI/xlm-roberta-large",
"region:us"
] |
fill-mask
| 2025-08-20T18:27:26Z |
---
library_name: transformers.js
base_model:
- FacebookAI/xlm-roberta-large
---
# xlm-roberta-large (ONNX)
This is an ONNX version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
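As a rough Python sketch, the exported graph can also be run directly with `onnxruntime`; the `onnx/model.onnx` filename follows the usual Transformers.js layout but is an assumption — check the repo's file list, and note that large exports may ship external-data files or quantized variants that must be downloaded alongside:
```python
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

# Assumed filename; adjust to the actual file list of the repo.
model_path = hf_hub_download("onnx-community/xlm-roberta-large-ONNX", "onnx/model.onnx")
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-large")

session = ort.InferenceSession(model_path)
inputs = tokenizer("Paris is the <mask> of France.", return_tensors="np")
logits = session.run(None, dict(inputs))[0]

# Decode the top prediction at the mask position.
mask_pos = int(np.argmax(inputs["input_ids"][0] == tokenizer.mask_token_id))
print(tokenizer.decode([int(logits[0, mask_pos].argmax())]))
```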
|
AnonymousCS/xlmr_immigration_combo22_1
|
AnonymousCS
| 2025-08-20T18:25:43Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T18:22:59Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo22_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo22_1
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2487
- Accuracy: 0.9152
- 1-f1: 0.8659
- 1-recall: 0.8224
- 1-precision: 0.9142
- Balanced Acc: 0.8919
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2306 | 1.0 | 25 | 0.2147 | 0.9229 | 0.8810 | 0.8571 | 0.9061 | 0.9064 |
| 0.212 | 2.0 | 50 | 0.2356 | 0.9254 | 0.8876 | 0.8842 | 0.8911 | 0.9151 |
| 0.1858 | 3.0 | 75 | 0.2487 | 0.9152 | 0.8659 | 0.8224 | 0.9142 | 0.8919 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1755714300
|
xinnn32
| 2025-08-20T18:25:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:25:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755712758
|
ihsanridzi
| 2025-08-20T18:25:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:25:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sushovan9/AERM-distilroberta-base-results
|
Sushovan9
| 2025-08-20T18:25:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T18:24:58Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilroberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: AERM-distilroberta-base-results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AERM-distilroberta-base-results
This model is a fine-tuned version of [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0130
- Mse: 0.0210
- Rmse: 0.1449
- Mae: 0.1050
- Mape: 0.0647
- R2: 0.6911
- Accuracy: 0.797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 8
- seed: 3407
- gradient_accumulation_steps: 2
- total_train_batch_size: 96
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Rmse | Mae | Mape | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|:------:|:--------:|
| 0.0266 | 1.0 | 84 | 0.0177 | 0.0287 | 0.1693 | 0.1248 | 0.0764 | 0.5781 | 0.704 |
| 0.0205 | 2.0 | 168 | 0.0149 | 0.0242 | 0.1555 | 0.1136 | 0.0698 | 0.6443 | 0.775 |
| 0.0186 | 3.0 | 252 | 0.0140 | 0.0227 | 0.1508 | 0.1096 | 0.0677 | 0.6653 | 0.781 |
| 0.0177 | 4.0 | 336 | 0.0137 | 0.0222 | 0.1490 | 0.1080 | 0.0665 | 0.6732 | 0.785 |
| 0.0169 | 5.0 | 420 | 0.0130 | 0.0210 | 0.1450 | 0.1048 | 0.0648 | 0.6907 | 0.782 |
| 0.0164 | 6.0 | 504 | 0.0133 | 0.0215 | 0.1466 | 0.1068 | 0.0659 | 0.6839 | 0.788 |
| 0.0167 | 7.0 | 588 | 0.0130 | 0.0211 | 0.1454 | 0.1054 | 0.0649 | 0.6890 | 0.796 |
| 0.0165 | 8.0 | 672 | 0.0129 | 0.0209 | 0.1446 | 0.1047 | 0.0646 | 0.6925 | 0.796 |
| 0.0165 | 9.0 | 756 | 0.0130 | 0.0210 | 0.1449 | 0.1050 | 0.0648 | 0.6910 | 0.797 |
| 0.0161 | 10.0 | 840 | 0.0130 | 0.0210 | 0.1449 | 0.1050 | 0.0647 | 0.6911 | 0.797 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
erikaputri-Viral-Video-Clip-XX/Orginal.full.Videos.erika.putri.viral.video.Official.Tutorial
|
erikaputri-Viral-Video-Clip-XX
| 2025-08-20T18:25:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T18:24:45Z |
|
minium/mobile-vla
|
minium
| 2025-08-20T18:24:44Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mobile_vla",
"vision-language-action",
"mobile-robot",
"kosmos-2b",
"robotics",
"obstacle-avoidance",
"en",
"ko",
"dataset:mobile-vla-dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
robotics
| 2025-08-20T18:12:14Z |
---
license: apache-2.0
tags:
- vision-language-action
- mobile-robot
- kosmos-2b
- robotics
- obstacle-avoidance
datasets:
- mobile-vla-dataset
language:
- en
- ko
metrics:
- mae
- r2_score
library_name: transformers
pipeline_tag: robotics
---
# ๐ Mobile VLA: Vision-Language-Action Model for Mobile Robots
## ๐ Model Description
Mobile VLA is a Vision-Language-Action model for mobile robots built on Kosmos-2B. It performs continuous 3D action prediction in obstacle-avoidance scenarios.
### 🎯 Key Features
- **Vision-Language-Action**: predicts robot actions from an image and a text instruction
- **3D continuous control**: a continuous action space of the form `[linear_x, linear_y, angular_z]`
- **Obstacle avoidance**: learns left/right avoidance strategies in 1-box and 2-box scenarios
- **Real-time processing**: fast inference through efficient vision-only processing
### 🔧 Technical Specifications
- **Backbone model**: microsoft/kosmos-2-patch14-224
- **Input**: RGB image (224x224) + text instruction
- **Output**: 3D continuous action vector
- **Training objective**: regression with Huber loss
- **Data**: 72 real robot episodes
## 📊 Performance Metrics
### Overall Performance
- **Overall MAE**: 0.285
- **Threshold accuracy (0.1)**: 37.5%
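The threshold-accuracy metric is reported without a formula; a common definition is the fraction of predictions whose absolute error stays within the threshold on every action dimension. A minimal sketch under that assumption (the exact definition behind the 37.5% figure is not documented here):
```python
import numpy as np

# Assumed definition: a prediction counts as correct only if all three
# action dimensions are within tau of the target.
def threshold_accuracy(pred: np.ndarray, target: np.ndarray, tau: float = 0.1) -> float:
    hits = (np.abs(pred - target) <= tau).all(axis=-1)
    return float(hits.mean())

pred = np.array([[0.20, 0.05, 0.01], [0.90, -0.40, 0.00]])
target = np.array([[0.25, 0.00, 0.02], [0.30, -0.10, 0.05]])
print(threshold_accuracy(pred, target))  # 0.5
```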
### Per-Action Performance
| Action | MAE | R² Score | Notes |
|-----------|-------|----------|------------------------------|
| linear_x | 0.243 | 0.354 | forward/backward (excellent) |
| linear_y | 0.550 | 0.293 | lateral movement (average) |
| angular_z | 0.062 | 0.000 | rotation (low) |
### Per-Scenario Performance
| Scenario | MAE | Grade | Notes |
|-----------------------|-------|-------|-----------|
| 1box_right_vertical | 0.217 | B+ | excellent |
| 1box_left_horizontal | 0.303 | B | good |
| 2box_left_vertical | 0.322 | B | good |
| 1box_left_vertical | 0.337 | B- | average |
## 🚀 Usage
### Installation
```bash
pip install transformers torch pillow numpy
```
### Basic Usage
```python
from mobile_vla import MobileVLAModel
from PIL import Image
import torch

# Load the model
model = MobileVLAModel.from_pretrained("minuum/mobile-vla")

# Prepare the image and the task instruction
image = Image.open("robot_camera.jpg")
task = "Navigate around obstacles to track the target cup"

# Predict
with torch.no_grad():
    actions = model.predict(image, task)

print(f"Predicted actions: {actions}")
# Output: [linear_x, linear_y, angular_z]
```
### Advanced Usage
```python
# Batch processing
images = [Image.open(f"frame_{i}.jpg") for i in range(8)]
actions = model.predict_sequence(images, task)

# Real-time control
for frame in camera_stream:
    action = model.predict(frame, task)
    robot.execute(action)
```
## 🏗️ Model Architecture
```
[RGB Images] → [Kosmos-2B Vision] → [Action Head] → [3D Actions]
     ↓                 ↓                  ↓              ↓
  224x224       Image Features       Regression     [x, y, θ]
```
### Core Components
1. **Kosmos-2B Vision Model**: image feature extraction
2. **Action Head**: 3D regression head (512 → 3*chunk_size); see the sketch below
3. **Window/Chunk**: 8-frame observation → 2-frame prediction
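As a rough illustration of component 2, here is a minimal PyTorch sketch of such a regression head, assuming a 512-dim pooled vision feature and a chunk size of 2 (`ActionHead` and its attribute names are illustrative, not the released implementation):
```python
import torch
import torch.nn as nn

class ActionHead(nn.Module):
    """Maps a pooled vision feature to chunk_size future 3D actions."""
    def __init__(self, feature_dim: int = 512, chunk_size: int = 2, action_dim: int = 3):
        super().__init__()
        self.chunk_size = chunk_size
        self.action_dim = action_dim
        self.regressor = nn.Linear(feature_dim, action_dim * chunk_size)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, feature_dim) -> (batch, chunk_size, action_dim)
        return self.regressor(features).view(-1, self.chunk_size, self.action_dim)

head = ActionHead()
pred = head(torch.randn(4, 512))                   # (4, 2, 3)
loss = nn.HuberLoss()(pred, torch.zeros(4, 2, 3))  # Huber-loss regression, as above
```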
## 📈 Comparison with RoboVLMs
| Item | RoboVLMs | Mobile VLA |
|------|----------|------------|
| **Data requirement** | millions of demos | 72 episodes |
| **Action space** | 7-DOF discrete | 3D continuous |
| **Inference speed** | mixed | fast |
| **Specialization** | general manipulation | mobile robots |
| **Evaluation** | success rate | multi-dimensional regression metrics |
## 🎯 Key Improvements
- **Data efficiency**: practical performance with 1000x less data
- **Real-time performance**: fast inference via vision-only processing
- **Continuous control**: precise 3D action prediction
- **Scenario specialization**: optimized specifically for obstacle avoidance
## 📚 Training Data
- **Number of episodes**: 72
- **Scenarios**: 1box/2box × left/right × vertical/horizontal
- **Actions**: continuous [linear_x, linear_y, angular_z] values
- **Images**: real robot camera RGB (224x224)
## 🔬 Research Background
This model keeps the Window/Chunk mechanism of RoboVLMs while adding capabilities specialized for mobile robots:
1. **Window/Chunk retained**: 8-frame observation → 2-frame prediction structure
2. **Kosmos-2B integration**: leverages the vision-language backbone
3. **Continuous control**: switches from a discrete to a continuous action space
4. **Real robot data**: data collected on a real robot, stored in HDF5 format
## 📝 Citation
```bibtex
@misc{mobile_vla_2024,
title={Mobile VLA: Vision-Language-Action Model for Mobile Robot Navigation},
author={Mobile VLA Team},
year={2024},
publisher={HuggingFace},
url={https://huggingface.co/minuum/mobile-vla}
}
```
## 🤝 Contributing
This model was developed on top of the RoboVLMs framework and is released openly to advance the mobile robot community.
## 📞 Contact
- **Issues**: [GitHub Issues](https://github.com/minuum/vla/issues)
- **Discussions**: [HuggingFace Discussions](https://huggingface.co/minuum/mobile-vla/discussions)
---
*Generated on 2025-08-21*
|
ada-f/ATOMICA
|
ada-f
| 2025-08-20T18:24:28Z | 0 | 7 | null |
[
"region:us"
] | null | 2025-03-27T03:11:23Z |
# ATOMICA: Learning Universal Representations of Intermolecular Interactions
This repo contains the trained model weights and configs for the ATOMICA models.
ATOMICA is a geometric AI model that learns universal representations of molecular interactions at an atomic scale. The model is pretrained on 2,037,972 molecular interaction interfaces from the Protein Data Bank and the Cambridge Structural Database, including protein-small molecule, protein-ion, small molecule-small molecule, protein-protein, protein-peptide, protein-RNA, protein-DNA, and nucleic acid-small molecule complexes. ATOMICA embeddings can be generated with the open-source model weights and code and used for various downstream tasks. In the paper, we demonstrate the utility of ATOMICA embeddings for studying the human interfaceome network with ATOMICANets and for annotating ions and small molecules to proteins in the dark proteome.
[Preprint](https://www.biorxiv.org/content/10.1101/2025.04.02.646906v1) | [Project Website](https://zitniklab.hms.harvard.edu/projects/ATOMICA) | [GitHub](https://github.com/mims-harvard/ATOMICA)
### Model Checkpoints
The following models are available:
* ATOMICA model
* Pretrained ATOMICA-Interface model for construction of ATOMICANets
* Finetuned ATOMICA-Ligand prediction models for the following ligands:
* metal ions: Ca, Co, Cu, Fe, K, Mg, Mn, Na, Zn
* small molecules: ADP, ATP, GTP, GDP, FAD, NAD, NAP, NDP, HEM, HEC, CIT, CLA
### Setup Instructions
1. Install the huggingface cli `pip install -U "huggingface_hub[cli]"`
2. Download the checkpoints with `hf download ada-f/ATOMICA`
3. Known issue: `ATOMICA_checkpoints/ligand/small_molecules/NAD/NAD_v2.pt` has a [HuggingFace server-side issue](https://github.com/mims-harvard/ATOMICA/issues/8) where the uploaded and downloaded file does not match. In the interim, please use the checkpoint provided on [Google Drive](https://drive.google.com/file/d/1Dwajwx7hgOCEZYN2qwl6H8vJsnwcZSov/view?usp=sharing).
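For scripted setups, the same download can also be done from Python; a minimal sketch using `huggingface_hub`:
```python
from huggingface_hub import snapshot_download

# Fetches all checkpoints and configs into the local Hugging Face cache.
local_dir = snapshot_download(repo_id="ada-f/ATOMICA")
print(local_dir)  # contains the ATOMICA_checkpoints/... tree referenced above
```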
---
license: cc-by-4.0
---
|
ChavyvAkvar/Liquid-Thinking-Q4_K_M-GGUF
|
ChavyvAkvar
| 2025-08-20T18:23:54Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"base_model:ChavyvAkvar/Liquid-Thinking",
"base_model:quantized:ChavyvAkvar/Liquid-Thinking",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T18:23:48Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
- llama-cpp
- gguf-my-repo
base_model: ChavyvAkvar/Liquid-Thinking
---
# ChavyvAkvar/Liquid-Thinking-Q4_K_M-GGUF
This model was converted to GGUF format from [`ChavyvAkvar/Liquid-Thinking`](https://huggingface.co/ChavyvAkvar/Liquid-Thinking) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ChavyvAkvar/Liquid-Thinking) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ChavyvAkvar/Liquid-Thinking-Q4_K_M-GGUF --hf-file liquid-thinking-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ChavyvAkvar/Liquid-Thinking-Q4_K_M-GGUF --hf-file liquid-thinking-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ChavyvAkvar/Liquid-Thinking-Q4_K_M-GGUF --hf-file liquid-thinking-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ChavyvAkvar/Liquid-Thinking-Q4_K_M-GGUF --hf-file liquid-thinking-q4_k_m.gguf -c 2048
```
|
Leoar/blockassist-bc-pudgy_toothy_cheetah_1755714016
|
Leoar
| 2025-08-20T18:23:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pudgy toothy cheetah",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:23:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pudgy toothy cheetah
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
erikaputri-Viral-Video-Clips-hq/New.Orginal.full.Videos.erika.putri.viral.video.Official.Tutorial
|
erikaputri-Viral-Video-Clips-hq
| 2025-08-20T18:23:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T18:23:10Z |
|
AnonymousCS/xlmr_immigration_combo22_0
|
AnonymousCS
| 2025-08-20T18:22:55Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T18:18:37Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo22_0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo22_0
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2641
- Accuracy: 0.9113
- 1-f1: 0.8639
- 1-recall: 0.8456
- 1-precision: 0.8831
- Balanced Acc: 0.8948
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.5844 | 1.0 | 25 | 0.5313 | 0.7532 | 0.4637 | 0.3205 | 0.8384 | 0.6448 |
| 0.2438 | 2.0 | 50 | 0.2455 | 0.9177 | 0.8704 | 0.8301 | 0.9149 | 0.8958 |
| 0.2346 | 3.0 | 75 | 0.2431 | 0.9267 | 0.8871 | 0.8649 | 0.9106 | 0.9112 |
| 0.2417 | 4.0 | 100 | 0.2521 | 0.9139 | 0.8709 | 0.8726 | 0.8692 | 0.9035 |
| 0.1895 | 5.0 | 125 | 0.2641 | 0.9113 | 0.8639 | 0.8456 | 0.8831 | 0.8948 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Flo0620/Qwen2_5_7B_r64_a64_d0_1_CombinedOhneTestSplits
|
Flo0620
| 2025-08-20T18:21:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-18T15:47:51Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: Qwen2_5_7B_r64_a64_d0_1_CombinedOhneTestSplits
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2_5_7B_r64_a64_d0_1_CombinedOhneTestSplits
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Flo0620/Qwen2_5_7B_r64_a64_d0_1_CombinedOhneTestSplits", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755712225
|
coelacanthxyz
| 2025-08-20T18:19:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"finicky thriving grouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:19:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- finicky thriving grouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/pixel-art-style
|
Muapi
| 2025-08-20T18:19:02Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T18:18:48Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Pixel Art Style

**Base model**: Flux.1 D
**Trained words**: pixel_art_style
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:689318@771472", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Karimtawfik/flant5-finetuned-corrector
|
Karimtawfik
| 2025-08-20T18:19:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T18:16:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755713846
|
0xaoyama
| 2025-08-20T18:18:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:17:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Durlabh/gemma-270m-q4-k-m-gguf
|
Durlabh
| 2025-08-20T18:17:39Z | 0 | 0 |
gguf
|
[
"gguf",
"quantized",
"llama.cpp",
"gemma",
"text-generation",
"q4_k_m",
"edge-deployment",
"mobile-app",
"en",
"base_model:google/gemma-3-270m-it",
"base_model:quantized:google/gemma-3-270m-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-20T16:51:03Z |
---
license: gemma
base_model: google/gemma-3-270m-it
tags:
- quantized
- gguf
- llama.cpp
- gemma
- text-generation
- q4_k_m
- edge-deployment
- mobile-app
library_name: gguf
pipeline_tag: text-generation
language:
- en
model_type: gemma
---
# Gemma 3 270M Instruction-Tuned - Q4_K_M Quantized (GGUF)
## Model Description
This is a quantized version of Google's Gemma 3 270M instruction-tuned model, optimized for efficient inference on consumer hardware and mobile applications. The model has been converted to GGUF format and quantized using Q4_K_M quantization through llama.cpp, making it perfect for resource-constrained environments.
## Model Details
- **Base Model**: [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it)
- **Model Type**: Large Language Model (LLM)
- **Quantization**: Q4_K_M
- **Format**: GGUF
- **File Size**: 253MB
- **Precision**: 4-bit quantized weights with mixed precision
- **Framework**: Compatible with llama.cpp, Ollama, and other GGUF-compatible inference engines
## Quantization Details
- **Method**: Q4_K_M quantization via llama.cpp
- **Benefits**: Significantly reduced memory footprint while maintaining model quality
- **Use Case**: Optimized for edge deployment, mobile applications, and resource-constrained environments
- **Performance**: Maintains competitive performance compared to the original Gemma 3 instruction-tuned model
## Real-World Application
This model is actively used in a production mobile application available on app stores. The app demonstrates the practical viability of running quantized LLMs on mobile devices while maintaining user privacy through on-device inference. The implementation showcases:
- **On-device AI**: No data sent to external servers
- **Fast inference**: Optimized for mobile hardware
- **Efficient memory usage**: Runs smoothly on consumer devices
- **App Store compliance**: Meets all platform requirements including Gemma licensing terms
## Usage
### With llama.cpp
```bash
# Download the model
wget https://huggingface.co/Durlabh/gemma-270m-q4-k-m-gguf/resolve/main/gemma-270m-q4-k-m.gguf
# Run inference
./main -m gemma-270m-q4-k-m.gguf -p "Your prompt here"
```
### With Ollama
```bash
# Create Modelfile
echo "FROM ./gemma-270m-q4-k-m.gguf" > Modelfile
# Create and run
ollama create gemma-270m-q4 -f Modelfile
ollama run gemma-270m-q4
```
### With Python (llama-cpp-python)
```python
from llama_cpp import Llama
# Load model
llm = Llama(model_path="gemma-270m-q4-k-m.gguf")
# Generate text
output = llm("Your prompt here", max_tokens=100)
print(output['choices'][0]['text'])
```
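Since this is the instruction-tuned variant, chat-style prompting also works; a minimal sketch with `create_chat_completion` (llama-cpp-python reads the chat template from the GGUF metadata when one is embedded):
```python
from llama_cpp import Llama

llm = Llama(model_path="gemma-270m-q4-k-m.gguf", n_ctx=2048)
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain Q4_K_M quantization in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```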
### Mobile Integration
For mobile app development, this model can be integrated using:
- **iOS**: llama.cpp with Swift bindings
- **Android**: JNI wrappers or TensorFlow Lite conversion
- **React Native**: Native modules with llama.cpp
- **Flutter**: Platform channels with native implementations
## System Requirements
- **RAM**: Minimum 1GB, Recommended 2GB+
- **Storage**: 300MB for model file
- **CPU**: Modern x86_64 or ARM64 processor
- **Mobile**: iOS 12+ / Android API 21+
- **OS**: Windows, macOS, Linux
## Performance Metrics
| Metric | Original F16 | Q4_K_M | Improvement |
|--------|-------------|---------|-------------|
| Size | ~540MB | 253MB | 53% reduction |
| RAM Usage | ~1GB | ~400MB | 60% reduction |
| Inference Speed | Baseline | ~2x faster | 2x speedup |
| Mobile Performance | Too large | Excellent | ✅ Mobile ready |
*Performance tested on various devices including mobile hardware*
## License and Usage
**Important**: This model is a derivative of Google's Gemma and is subject to the original licensing terms.
**Gemma is provided under and subject to the [Gemma Terms of Use](https://ai.google.dev/gemma/terms).**
### Key Points:
- ✅ **Commercial use permitted** under the Gemma license
- ✅ **Mobile app deployment allowed** with proper attribution
- โ ๏ธ **Must comply** with the [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy)
- ๐ **App store compliance**: Licensing terms disclosed in app store listings
- ๐ **Redistribution**: Must include proper attribution and license terms
### Usage Restrictions
As per the Gemma Terms of Use, this model cannot be used for:
- Illegal activities
- Child safety violations
- Generation of hateful, harassing, or violent content
- Generation of false or misleading information
- Privacy violations
See the full [Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy) for complete details.
## Mobile App Compliance
This model is used in compliance with:
- **Gemma Terms of Use**: Full licensing terms disclosed
- **App Store Guidelines**: Platform requirements met
- **Privacy Standards**: On-device processing, no data collection
- **Performance Standards**: Optimized for mobile hardware
## Limitations
- Quantization may result in slight quality degradation compared to the original Gemma 3 instruction-tuned model
- Performance characteristics may vary across different hardware platforms
- Subject to the same content limitations as the base Gemma 3 instruction-tuned model
- Context length and capabilities inherited from base Gemma 3 270M instruction-tuned model
- Mobile performance depends on device specifications
## Technical Specifications
- **Original Parameters**: 270M
- **Quantization Scheme**: Q4_K_M (4-bit weights, mixed precision for critical layers)
- **Context Length**: 32,768 tokens (inherited from Gemma 3 270M)
- **Vocabulary Size**: 256,000 tokens
- **Architecture**: Transformer decoder
- **Attention Heads**: 8
- **Hidden Layers**: 18
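These specifications can be cross-checked against the file itself by reading the GGUF header with the `gguf` Python package (a sketch; exact key names vary by exporter version):
```python
# pip install gguf
from gguf import GGUFReader

reader = GGUFReader("gemma-270m-q4-k-m.gguf")
for field in reader.fields.values():
    print(field.name)  # e.g. general.architecture, context length, head counts
```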
## Download Options
### Direct Download
```bash
# Using wget
wget https://huggingface.co/Durlabh/gemma-270m-q4-k-m-gguf/resolve/main/gemma-270m-q4-k-m.gguf
# Using curl
curl -L -o gemma-270m-q4-k-m.gguf https://huggingface.co/Durlabh/gemma-270m-q4-k-m-gguf/resolve/main/gemma-270m-q4-k-m.gguf
```
### Programmatic Download
```python
# Using huggingface-hub
from huggingface_hub import hf_hub_download
model_path = hf_hub_download(
repo_id="Durlabh/gemma-270m-q4-k-m-gguf",
filename="gemma-270m-q4-k-m.gguf"
)
```
## Citation
If you use this model, please cite both the original Gemma work and acknowledge the quantization:
```bibtex
@misc{durlabh-gemma-270m-q4-k-m,
title={Gemma 3 270M Instruction-Tuned Q4_K_M Quantized},
author={Durlabh},
year={2025},
note={Quantized version of Google's Gemma 3 270M instruction-tuned model using llama.cpp Q4_K_M},
url={https://huggingface.co/Durlabh/gemma-270m-q4-k-m-gguf}
}
```
Original Gemma 3 paper:
```bibtex
@misc{gemma3_2025,
title={Gemma 3: Google's new open model based on Gemini 2.0},
author={Gemma Team},
year={2025},
publisher={Google},
url={https://blog.google/technology/developers/gemma-3/}
}
```
## Community & Support
- **Issues**: Report problems or questions in the repository discussions
- **Mobile Development**: See model usage in production mobile applications
- **Quantization**: Built with llama.cpp for optimal performance
## Acknowledgments
- **Google DeepMind team** for the original Gemma model
- **llama.cpp community** for the quantization tools and GGUF format
- **Hugging Face** for hosting infrastructure
- **Georgi Gerganov** for creating and maintaining llama.cpp
- **Mobile AI community** for advancing on-device inference
## Disclaimer
This is an unofficial quantized version of Gemma 3 created for practical mobile deployment. For official Gemma models, please visit [Google's official Gemma page](https://ai.google.dev/gemma).
The mobile application using this model fully complies with platform guidelines and Gemma licensing requirements.
---
**Ready for production use!** This model powers real-world mobile applications while maintaining full compliance with licensing terms.
|
Rewqeas/code-search-net-tokenizer
|
Rewqeas
| 2025-08-20T18:17:04Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T18:17:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-Clip-pr-ratri-viral-video-Links-XX/Orginal.full.Videos.pr.ratri.viral.video.Official.Tutorial
|
New-Clip-pr-ratri-viral-video-Links-XX
| 2025-08-20T18:17:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T18:16:53Z |
|
Muapi/wet-and-messy-flux
|
Muapi
| 2025-08-20T18:16:41Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T18:16:28Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Wet and Messy (FLUX)

**Base model**: Flux.1 D
**Trained words**: wet, covered in oil, covered in mud, wam, wet clothes, pouring oil, hair covered in oil, wetlook, oil, mud, holding oil bottle, see-through, partialy submerged, muddy hair, muddy clothes, soaking wet
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:694850@777601", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
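Note that `model_id` is list-valued, which suggests several LoRAs can be stacked with individual weights; this is an assumption about the endpoint, not behavior documented on this card. A hedged sketch:

```python
# Hypothetical: stacking two LoRAs; assumes the endpoint accepts multiple
# entries in model_id (not confirmed by this card). The second ID is a
# placeholder, not a real model reference.
payload["model_id"] = [
    {"model": "civitai:694850@777601", "weight": 0.8},
    {"model": "civitai:0000000@0000000", "weight": 0.5},
]
```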
|
Muapi/jj-s-interior-space-kitchen
|
Muapi
| 2025-08-20T18:16:23Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T18:16:06Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# JJ's Interior Space - Kitchen

**Base model**: Flux.1 D
**Trained words**: kitchen
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:335701@1285748", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755713650
|
0xaoyama
| 2025-08-20T18:14:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:14:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/wizard-s-grimdark-the-grit
|
Muapi
| 2025-08-20T18:14:41Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T18:14:25Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Wizard's Grimdark: The Grit

**Base model**: Flux.1 D
**Trained words**: gritty, poster art
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:844992@945353", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/mei-lee-red-panda-pony-illustrious-flux
|
Muapi
| 2025-08-20T18:12:54Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T18:12:24Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Mei Lee "Red panda" [Pony/Illustrious/Flux]

**Base model**: Flux.1 D
**Trained words**: MeiPandaIL
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:701944@1883410", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/kiwi-furry-style-flux
|
Muapi
| 2025-08-20T18:12:13Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T18:11:47Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# kiwi furry style FLUX

**Base model**: Flux.1 D
**Trained words**: furry, anime style
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:765320@856009", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755711927
|
helmutsukocok
| 2025-08-20T18:11:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:11:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755711822
|
thanobidex
| 2025-08-20T18:11:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:11:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ai-weekly/ai.undressing.app
|
ai-weekly
| 2025-08-20T18:10:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T17:41:29Z |
# AI Undress: Best Undress Apps in 2025
(Last Updated: 22 August, 2025)
**Undress AI โ The Most Advanced Clothing-Removal & Photo Reveal Tool (2025)**
โข **AI clothing removal & reveal** โ ultra-realistic skin rendering, soft shadows, and fine-textured simulations
โข **Try-on & swap modes** โ preview lingerie, sheer fabrics, or simulate full undressing
โข **Smart editing tools** โ tan line reconstruction, skin smoothing, lighting correction, detail sharpening
โข **Batch & single image support** โ quick rendering with uniform output
โข **Privacy-focused โข 18+ only โข Use responsibly with owned/consented images**
โฉโฉโฉ [**Try the Best Undress AI Now**](https://aiweely.com/tools)
Updated 22 August, 2025 โ As AI-driven image manipulation grows, so does the popularity of terms like *AI Undress*, *Undress AI*, and *AI cloth remover*. This guide offers a transparent, up-to-date breakdown of what these tools are, how they work, and whether theyโre safeโor legalโto use.
---
## What Is AI Undress? (Updated 22 August, 2025)
AI Undress software uses machine learning to digitally simulate the removal of clothing from photos. Also called *undress AI*, *undress image AI*, or *AI photo undress generator*, it employs advanced neural networksโoften derived from deepfake and generative AIโto fabricate nudity beneath clothing.
Originally made famous by apps like *DeepNude*, the technology has now evolved. By 2025, undress tools are more sophisticated and more ethically problematicโdespite growing attempts at regulation.
---
## A Brief History of AI Undressing Tools
### DeepNude and Early Development
* In 2019, *DeepNude* shook the web by auto-generating nude images from clothed female photos.
* It used trained neural networks to generate fake undressed versions, mimicking real skin and anatomy.
* After backlash and privacy outcry, DeepNude was taken offlineโbut its core model was cloned and shared.
### Rapid Evolution (2020โ2025)
* Inspired by DeepNudeโs viral attention, underground forums birthed hundreds of copycat tools.
* Newer models used image-to-image translation and style transfer, producing more believable resultsโand making them harder to detect.
---
## Real vs. Fake: What AI Can Do in 2025 (Updated 22 August, 2025)
### Can AI Really Remove Clothes?
As of August 2025, yesโ*to an extent*. AI undress tools create lifelike *fakes* by statistically guessing body shapes beneath garments.
* AI cannot โseeโ under clothes; it fabricates realistic layers based on visual patterns from massive datasets.
* These apps use GANs or diffusion models to merge generated skin with existing imagery.
* Output = fake render, not a real body.
### Weaknesses of AI Undressing
* Complex poses, baggy clothing, or mixed lighting degrade the results.
* Common artifacts: unnatural body proportions, messy shadows, and misaligned limbs.
* Best results come from high-res, frontal shots with tight clothing and good lighting.
### Are AI Undress Generators Legitimate?
* 90% of "undress image AI" tools online are scams, malware, or illegal clones.
* Trustworthy apps are rare, expensive, and pose privacy risks.
* Mobile app stores actively ban these, pushing most activity to the web or Telegram bots.
---
## Examples of AI Undress Tools (As of August 2025)
| Tool | Type | Claims | Reality | Legal Status |
| ----------------------- | ------------ | ------------------- | --------------------------------- | ------------------------ |
| DeepNude / DeepNudeNext | PC/Web | Clothing removal | Inactive, cloned illegally | Banned in most countries |
| Undress AI Pro | Web/App | See-through effects | Mostly scam or unsafe | Illegal in many regions |
| DeepArtUndress | Web/Bot | GAN-powered realism | Somewhat real, costly | Legally restricted |
| FaceMagic NSFW | Mobile | Nude generation | Fake, low realism | Suspicious/unsafe |
| Forum Mods & Reddit | Scripts/Mods | DeepNude clones | Malware-prone, rare working tools | Illegal content |
๐ซ Many "AI undress apps for Android/iOS" are removed quickly. Proceed with extreme caution.
---
## Legal, Ethical & Security Concerns (Updated 22 August, 2025)
### Legal Landscape
* By 2025, distributing or generating undressed images of real people without permission is *illegal* in the U.S., UK, EU, and beyond.
* Consent-based AI editing is legal, but AI-generated non-consensual imagery = image-based abuse.
### Ethical Red Flags
* Psychological harm: victims often suffer trauma and online harassment.
* Promotes a toxic digital culture of voyeurism and non-consent.
* Most undressed AI content targets womenโamplifying gendered abuse.
### Security & Scam Warnings
* Many AI undress websites demand payment after uploadโblackmail risk.
* Fake tools harvest your photos or install malware.
* Apps claiming to offer "AI clothes remover" often contain trojans or spyware.
---
## Legal AI Uses vs. Abusive Applications
### Legitimate AI Use Cases
1. **Medical Simulation** โ AI used to model human anatomy for training and diagnostics.
2. **Virtual Try-On Tools** โ Fashion apps showing how clothes look, not removing them.
3. **Forensics** โ AI attempts to reconstruct human form from damaged or partial data (rare).
### Abusive Applications
* Generating fake nude images of *real people without consent* = illegal deepfake use.
* Even sharing โfor funโ can carry criminal liability.
---
## How to Detect AI Undressed Photos
### Spotting Fakes in 2025
* **AI Artifacts** โ Blurry transitions, odd fingers, warping.
* **Watermarks or EXIF Tags** — Many apps leave hidden traces (see the sketch after this list).
* **Reverse Image Search** โ Find the original version of the photo.
* **AI Forensics Tools** โ Apps like Deepware Scanner can detect manipulation.
* **Ask for Consent** โ When in doubt, ask the person in the image.
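As a concrete illustration of the metadata check above, a minimal sketch using Pillow; it assumes Pillow is installed and that the editor left EXIF data intact (many pipelines strip it, so absence proves nothing):

```python
# Minimal metadata check with Pillow (pip install Pillow).
# Absence of EXIF data is not proof of manipulation; many tools strip it.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("photo.jpg")
exif = img.getexif()
if not exif:
    print("No EXIF data found (stripped or never present).")
for tag_id, value in exif.items():
    name = TAGS.get(tag_id, tag_id)
    print(f"{name}: {value}")  # look for 'Software' entries naming an editor
```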
---
## Safe, Legal Alternatives to Undress AI
### Creative AI Tools
* **Adobe Firefly, RunwayML** โ For artists working with body models.
* **MetaHuman Creator** โ Build virtual people for films/games.
### Fashion & Medical
* **Zeekit, Fashwell** โ For trying on clothes virtually (with avatars).
* **3D Medical Apps** โ For health and biology teaching.
๐ Avoid sketchy tools labeled โAI nude generatorโ or โsee-through clothes app.โ
---
## FAQ: Undress AI (2025 Edition)
**1. What is AI Undress?**
Itโs software that uses AI to simulate undressing someone in a photo.
**2. Are these tools real?**
Some work, but most are scams. None can truly see through clothing.
**3. Are AI undress tools legal?**
Only if used with clear, written consent. Most arenโt.
**4. Is there a safe DeepNude alternative?**
Not for real people. Explore avatar-based AI instead.
**5. How do they work?**
AI combines GAN/diffusion synthesis with the input photo to fabricate new imagery.
**6. Can AI actually see beneath clothes?**
Noโit *guesses* using patterns from training data.
**7. Is this illegal?**
Yes, if done without the subject's consent.
**8. How can I tell if an image was undressed by AI?**
Look for visual inconsistencies, metadata, or use detection tools.
**9. Is there any ethical use of this tech?**
Yes, in medical imaging, fashion AR, and avatar design.
**10. Can AI reveal whatโs actually under clothes?**
No. All results are simulationsโnot reality.
---
## Final Words: AI Undress in 2025
While AI undressing tools have improved dramatically in realism, the legal and ethical issues have only grown. Most online platforms offering โAI cloth removerโ services are misleading, dangerous, or outright illegal. We strongly urge users to choose creative, ethical AI pathsโwhether for digital fashion, art, or education.
๐ Stay safe. Stay ethical. Build a future with responsible AI.
(Updated 22 August, 2025)
|
Chedjoun/llama3-finetuned-promql
|
Chedjoun
| 2025-08-20T18:09:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T13:20:10Z |
---
base_model: unsloth/llama-3-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Chedjoun
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-instruct-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
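The card gives no usage snippet; a minimal inference sketch, assuming the repo holds full (or merged) weights that load directly with transformers, and that accelerate/bitsandbytes are available for the 4-bit base:

```python
# Hedged sketch: assumes Chedjoun/llama3-finetuned-promql loads directly
# with transformers (if it is only a LoRA adapter, use peft instead).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Chedjoun/llama3-finetuned-promql"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "Write a PromQL query for the average CPU usage per pod."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```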
|
Orginal-Uppal-Farm-Girl-Viral-Videos-Links/New.full.videos.Uppal.Farm.Girl.Viral.Video.Official.Tutorial
|
Orginal-Uppal-Farm-Girl-Viral-Videos-Links
| 2025-08-20T18:09:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-20T18:03:30Z |
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755713309
|
lilTAT
| 2025-08-20T18:08:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:08:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755711528
|
katanyasekolah
| 2025-08-20T18:08:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:08:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
burkerlee123/blockassist-bc-tall_roaring_moose_1755711485
|
burkerlee123
| 2025-08-20T18:08:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall roaring moose",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:08:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall roaring moose
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_immigration_combo21_4
|
AnonymousCS
| 2025-08-20T18:08:11Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T18:04:57Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo21_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo21_4
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1502
- Accuracy: 0.9512
- 1-f1: 0.9264
- 1-recall: 0.9228
- 1-precision: 0.9300
- Balanced Acc: 0.9440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch fused, `OptimizerNames.ADAMW_TORCH_FUSED`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1087 | 1.0 | 25 | 0.1235 | 0.9602 | 0.9376 | 0.8996 | 0.9790 | 0.9450 |
| 0.106 | 2.0 | 50 | 0.1332 | 0.9614 | 0.9402 | 0.9112 | 0.9712 | 0.9489 |
| 0.1048 | 3.0 | 75 | 0.1502 | 0.9512 | 0.9264 | 0.9228 | 0.9300 | 0.9440 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
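No usage example is given; a minimal sketch with the transformers pipeline. Label semantics are an assumption: the card does not document what classes 0/1 mean, so inspect the repo's config before relying on outputs.

```python
# Hedged sketch: what label "1" denotes is not documented on this card;
# check the repo's config.json (id2label) before interpreting results.
from transformers import pipeline

clf = pipeline("text-classification",
               model="AnonymousCS/xlmr_immigration_combo21_4")
print(clf("Example sentence about immigration policy."))
```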
|
bboppp/blockassist-bc-trotting_restless_squirrel_1755713111
|
bboppp
| 2025-08-20T18:05:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"trotting restless squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:05:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- trotting restless squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_immigration_combo21_3
|
AnonymousCS
| 2025-08-20T18:04:52Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T18:01:06Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo21_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo21_3
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2167
- Accuracy: 0.9409
- 1-f1: 0.9105
- 1-recall: 0.9035
- 1-precision: 0.9176
- Balanced Acc: 0.9315
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch fused, `OptimizerNames.ADAMW_TORCH_FUSED`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1453 | 1.0 | 25 | 0.1747 | 0.9357 | 0.9020 | 0.8880 | 0.9163 | 0.9238 |
| 0.2073 | 2.0 | 50 | 0.1645 | 0.9512 | 0.9224 | 0.8726 | 0.9784 | 0.9315 |
| 0.1407 | 3.0 | 75 | 0.1829 | 0.9486 | 0.9206 | 0.8958 | 0.9469 | 0.9354 |
| 0.0598 | 4.0 | 100 | 0.2167 | 0.9409 | 0.9105 | 0.9035 | 0.9176 | 0.9315 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755713013
|
lilTAT
| 2025-08-20T18:04:04Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T18:03:56Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_immigration_combo21_2
|
AnonymousCS
| 2025-08-20T18:01:01Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-20T17:57:21Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_immigration_combo21_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_immigration_combo21_2
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2842
- Accuracy: 0.9242
- 1-f1: 0.8850
- 1-recall: 0.8764
- 1-precision: 0.8937
- Balanced Acc: 0.9122
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch fused, `OptimizerNames.ADAMW_TORCH_FUSED`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2109 | 1.0 | 25 | 0.2585 | 0.9075 | 0.8662 | 0.8996 | 0.8351 | 0.9055 |
| 0.1807 | 2.0 | 50 | 0.2331 | 0.9267 | 0.8889 | 0.8803 | 0.8976 | 0.9151 |
| 0.0668 | 3.0 | 75 | 0.2858 | 0.9165 | 0.8748 | 0.8764 | 0.8731 | 0.9064 |
| 0.1601 | 4.0 | 100 | 0.2842 | 0.9242 | 0.8850 | 0.8764 | 0.8937 | 0.9122 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Muapi/ink-watercolor-hybrid-style-from-legacy-doubao-for-flux-dev-noobai-and-illustrious
|
Muapi
| 2025-08-20T17:56:43Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T17:56:16Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Ink & Watercolor Hybrid Style from Legacy Doubao for Flux Dev, NoobAI and Illustrious

**Base model**: Flux.1 D
**Trained words**: An image in style of iwhyb
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1406430@1922670", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/tifa-lockhart-final-fantasy-vii
|
Muapi
| 2025-08-20T17:55:57Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T17:55:48Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Tifa Lockhart - Final Fantasy VII

**Base model**: Flux.1 D
**Trained words**: TifaFFVII
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1163983@1309406", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
Muapi/little-red-riding-hood-xl-sd1.5-f1d
|
Muapi
| 2025-08-20T17:55:23Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T17:55:00Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Little Red Riding Hood XL + SD1.5 + F1D

**Base model**: Flux.1 D
**Trained words**: Little Red Riding Hood
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:196138@1135762", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
mradermacher/granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF
|
mradermacher
| 2025-08-20T17:53:58Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Debk/granite-3.3-2b-finetuned-alpaca-hindi_full",
"base_model:quantized:Debk/granite-3.3-2b-finetuned-alpaca-hindi_full",
"endpoints_compatible",
"region:us"
] | null | 2025-08-20T17:34:06Z |
---
base_model: Debk/granite-3.3-2b-finetuned-alpaca-hindi_full
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Debk/granite-3.3-2b-finetuned-alpaca-hindi_full
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
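Beyond the linked READMEs, a minimal local-inference sketch; llama-cpp-python and huggingface_hub are assumptions here (`pip install llama-cpp-python huggingface_hub`), not part of this card:

```python
# Hedged sketch: downloads the Q4_K_M quant from this repo and runs a
# short completion locally with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF",
    filename="granite-3.3-2b-finetuned-alpaca-hindi_full.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Translate to Hindi: Good morning.", max_tokens=64)
print(out["choices"][0]["text"])
```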
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF/resolve/main/granite-3.3-2b-finetuned-alpaca-hindi_full.Q2_K.gguf) | Q2_K | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF/resolve/main/granite-3.3-2b-finetuned-alpaca-hindi_full.Q3_K_S.gguf) | Q3_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF/resolve/main/granite-3.3-2b-finetuned-alpaca-hindi_full.Q3_K_M.gguf) | Q3_K_M | 1.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF/resolve/main/granite-3.3-2b-finetuned-alpaca-hindi_full.Q3_K_L.gguf) | Q3_K_L | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF/resolve/main/granite-3.3-2b-finetuned-alpaca-hindi_full.IQ4_XS.gguf) | IQ4_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF/resolve/main/granite-3.3-2b-finetuned-alpaca-hindi_full.Q4_K_S.gguf) | Q4_K_S | 1.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF/resolve/main/granite-3.3-2b-finetuned-alpaca-hindi_full.Q4_K_M.gguf) | Q4_K_M | 1.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF/resolve/main/granite-3.3-2b-finetuned-alpaca-hindi_full.Q5_K_S.gguf) | Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF/resolve/main/granite-3.3-2b-finetuned-alpaca-hindi_full.Q5_K_M.gguf) | Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF/resolve/main/granite-3.3-2b-finetuned-alpaca-hindi_full.Q6_K.gguf) | Q6_K | 2.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF/resolve/main/granite-3.3-2b-finetuned-alpaca-hindi_full.Q8_0.gguf) | Q8_0 | 2.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/granite-3.3-2b-finetuned-alpaca-hindi_full-GGUF/resolve/main/granite-3.3-2b-finetuned-alpaca-hindi_full.f16.gguf) | f16 | 5.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/ImagineGPT-GGUF
|
mradermacher
| 2025-08-20T17:53:58Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"en",
"base_model:d-s-b/ImagineGPT",
"base_model:quantized:d-s-b/ImagineGPT",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-20T17:46:04Z |
---
base_model: d-s-b/ImagineGPT
language:
- en
library_name: transformers
model_name: ImagineGPT
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/d-s-b/ImagineGPT
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ImagineGPT-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
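For fetching a single quant file from this repo, a minimal sketch assuming huggingface_hub is installed; the filename is taken from the table below:

```python
# Hedged sketch: downloads one quant file to the local HF cache and
# prints its path; huggingface_hub is an assumption, not part of this card.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/ImagineGPT-GGUF",
    filename="ImagineGPT.Q4_K_M.gguf",
)
print(path)
```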
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ImagineGPT-GGUF/resolve/main/ImagineGPT.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/ImagineGPT-GGUF/resolve/main/ImagineGPT.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/ImagineGPT-GGUF/resolve/main/ImagineGPT.IQ4_XS.gguf) | IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/ImagineGPT-GGUF/resolve/main/ImagineGPT.Q3_K_M.gguf) | Q3_K_M | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ImagineGPT-GGUF/resolve/main/ImagineGPT.Q3_K_L.gguf) | Q3_K_L | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/ImagineGPT-GGUF/resolve/main/ImagineGPT.Q4_K_S.gguf) | Q4_K_S | 0.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ImagineGPT-GGUF/resolve/main/ImagineGPT.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ImagineGPT-GGUF/resolve/main/ImagineGPT.Q5_K_S.gguf) | Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/ImagineGPT-GGUF/resolve/main/ImagineGPT.Q5_K_M.gguf) | Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/ImagineGPT-GGUF/resolve/main/ImagineGPT.Q6_K.gguf) | Q6_K | 0.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ImagineGPT-GGUF/resolve/main/ImagineGPT.Q8_0.gguf) | Q8_0 | 0.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ImagineGPT-GGUF/resolve/main/ImagineGPT.f16.gguf) | f16 | 0.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
youuotty/blockassist-bc-omnivorous_squeaky_bear_1755712391
|
youuotty
| 2025-08-20T17:53:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"omnivorous squeaky bear",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-20T17:53:13Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- omnivorous squeaky bear
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Muapi/gpt-styles-for-il-flux-shrekman-styles-mix
|
Muapi
| 2025-08-20T17:53:27Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-20T17:53:13Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# GPT Styles For IL&FLUX | Shrekman Styles Mix

**Base model**: Flux.1 D
**Trained words**: ancient Greek art, GPAMV1
## ๐ง Usage (Python)
๐ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:1612420@2049779", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|