Dataset schema (one row per model; each record below lists these fields in order, followed by the model card text):

| column | type | range / cardinality |
|---|---|---|
| modelId | string | 5 to 139 chars |
| author | string | 2 to 42 chars |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-08-21 00:39:42 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 514 distinct values |
| tags | list | 1 to 4.05k items |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-08-21 00:38:48 |
| card | string | 11 to 1.01M chars |
ngozimagen/ngozi-lora
|
ngozimagen
| 2025-08-19T17:46:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-19T16:59:52Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ngozi
---
# Ngozi Lora
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ngozi` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ngozi",
"lora_weights": "https://huggingface.co/ngozimagen/ngozi-lora/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ngozimagen/ngozi-lora', weight_name='lora.safetensors')
image = pipeline('ngozi').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
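As a quick illustration of the weighting and fusing options mentioned above, here is a minimal sketch continuing from the pipeline code; the 0.8 scale is an illustrative assumption, not a value from this card:
```py
# Continuing from the pipeline above: fuse the already-loaded LoRA at reduced
# strength, generate, then restore the base weights. The 0.8 scale is an
# illustrative assumption.
pipeline.fuse_lora(lora_scale=0.8)   # bake the LoRA into the base weights at 80% strength
image = pipeline('ngozi').images[0]
pipeline.unfuse_lora()               # restore the original base weights
```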
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ngozimagen/ngozi-lora/discussions) to add images that show off what you’ve made with this LoRA.
|
arka7/Llama-3.2-3B-Instruct-bnb-4bit-rag-finetuned-with-DPO
|
arka7
| 2025-08-19T17:45:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T17:45:17Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** arka7
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
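A minimal inference sketch for this checkpoint; it assumes the repo contains merged weights loadable directly with transformers, which the card does not state explicitly, and the prompt is only an example:
```py
# Hedged sketch: load the uploaded checkpoint with plain transformers.
# Assumes merged (non-adapter) weights; the prompt is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "arka7/Llama-3.2-3B-Instruct-bnb-4bit-rag-finetuned-with-DPO"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "In one sentence, what is retrieval-augmented generation?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```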
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Timia123/simpo_inpo_iter2_aug19
|
Timia123
| 2025-08-19T17:43:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"alignment-handbook",
"inpo",
"generated_from_trainer",
"conversational",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T17:40:20Z |
---
library_name: transformers
base_model: google/gemma-2-9b-it
tags:
- alignment-handbook
- inpo
- generated_from_trainer
model-index:
- name: gemma-2-9b-it_inpo_stage_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2-9b-it_inpo_stage_2
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on the data/inpo_iter2/pref dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
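The reported total train batch size follows directly from the per-device settings above, as this small check shows:
```py
# Effective global batch size implied by the hyperparameters above.
train_batch_size = 2              # per device
num_devices = 8                   # GPUs
gradient_accumulation_steps = 16
assert train_batch_size * num_devices * gradient_accumulation_steps == 256
```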
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.2.2
- Datasets 2.14.6
- Tokenizers 0.19.1
|
VIDEOS-19-Afrin-Er-Viral-Video-Clip/New.full.videos.Afrin.Er.Viral.Video.Official.Tutorial
|
VIDEOS-19-Afrin-Er-Viral-Video-Clip
| 2025-08-19T17:43:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T17:42:55Z |
|
vslinx/ComfyUIDetailerWorkflow-vslinx
|
vslinx
| 2025-08-19T17:42:51Z | 0 | 1 | null |
[
"region:us"
] | null | 2025-05-13T12:09:52Z |
# ComfyUI Detailer / ADetailer Workflow
## Requirements (Custom Nodes)
Requirements for each version are listed below or can be found inside a **Note** in the Workflow itself.
Because of the many connections among the nodes, I highly recommend turning off the link visibility by clicking the **"Toggle Link visibility"** (Eye icon) in the bottom right of ComfyUI.
## Description
I wasn't really satisfied with most Detailer workflows: they were either needlessly complicated or didn't offer enough options out of the box.
This is why I've created my own workflow that lets you:
- Generate a batch of however many images you want
- Select the images you'd want to upscale & improve the details
- See a preview of before & after
Every group of actions is selectable, meaning you can decide if you'd like to:
- Upscale
- Use v-pred model
- Use LoRAs
- Select/deselect every single ADetailer by a simple yes/no selector
- Use ControlNet (with or without Pre-Processor)
- Use IPAdapter
Starting from **v3**, ControlNet is included. <br>
Starting from **v4**, IPAdapter is included.
---
## Requirements
### v4
- [ComfyUI Impact Pack](https://github.com/ltdrdata/ComfyUI-Impact-Pack)
- [ComfyUI Impact Subpack](https://github.com/ltdrdata/ComfyUI-Impact-Subpack)
- [ComfyUI-mxToolkit](https://github.com/Smirnov75/ComfyUI-mxToolkit)
- [ComfyUI-Easy-Use](https://github.com/yolain/ComfyUI-Easy-Use)
- [ComfyUI-Custom-Scripts](https://github.com/pythongosssss/ComfyUI-Custom-Scripts)
- [ComfyUI-Crystools](https://github.com/crystian/ComfyUI-Crystools)
- [ComfyUI-Image-Saver](https://github.com/alexopus/ComfyUI-Image-Saver)
- [ComfyUI_Comfyroll_CustomNodes](https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes)
- [ComfyUI-Advanced-ControlNet](https://github.com/Kosinkadink/ComfyUI-Advanced-ControlNet)
- [ComfyUI-KJNodes](https://github.com/kijai/ComfyUI-KJNodes)
- [ComfyUI_IPAdapter_plus](https://github.com/cubiq/ComfyUI_IPAdapter_plus)
- [comfyui_controlnet_aux](https://github.com/Fannovel16/comfyui_controlnet_aux)
- [cg-use-everywhere](https://github.com/chrisgoringe/cg-use-everywhere)
- [cg-image-filter](https://github.com/chrisgoringe/cg-image-filter)
- [rgthree-comfy](https://github.com/rgthree/rgthree-comfy)
### v3-3.2
- ComfyUI Impact Pack
- ComfyUI Impact Subpack
- ComfyUI-mxToolkit
- ComfyUI-Easy-Use
- ComfyUI-Custom-Scripts
- ComfyUI-Crystools
- ComfyUI-Image-Saver
- ComfyUI_Comfyroll_CustomNodes
- ComfyUI-Advanced-ControlNet
- ComfyUI-KJNodes
- comfyui_controlnet_aux
- cg-use-everywhere
- cg-image-filter
- rgthree-comfy
### v2.2
- ComfyUI_Comfyroll_Nodes
- Otherwise the same custom nodes as v2, but you can remove **Comfyui-ergouzi-Nodes**
### v2
- ComfyUI Impact Pack
- ComfyUI Impact Subpack
- ComfyUI-mxToolkit
- ComfyUI-Easy-Use
- ComfyUI-Custom-Scripts
- ComfyUI-Crystools
- Comfyui-ergouzi-Nodes
- ComfyUI-Image-Saver
- cg-use-everywhere
- cg-image-filter
- rgthree-comfy
### v1
- ComfyUI Impact Pack
- ComfyUI-Custom-Scripts
- cg-use-everywhere
- cg-image-picker
- ComfyUI Impact Subpack
---
## How to Use
Since the different versions work differently, check the **"How to use"** note inside the workflow itself.
I promise that once you read the explanation, it'll click and become a simple plug-and-play experience.
It's the simplest I could make it, coming from someone who only started using ComfyUI 4-5 months ago and was exclusively an A1111 WebUI user before.
---
## Missing ViT-B SAM Model?
If you're missing the **ViT-B SAM model** (some portable ComfyUI versions don't ship with it), you can find it through the **Model Manager** in the **ComfyUI Manager**.
You'll notice it's missing if your workflow stops after image generation and never runs the detailing step.
---
## Feedback
I'd love to hear your feedback on the workflow; it's the first one I've ever created from scratch.
If you want to do me a huge favor, post your results on the model page [here](https://civitai.com/models/1297813) and I'll make sure to send some buzz your way!
|
lilTAT/blockassist-bc-gentle_rugged_hare_1755625264
|
lilTAT
| 2025-08-19T17:41:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle rugged hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:41:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle rugged hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
yookty/blockassist-bc-whistling_exotic_chicken_1755625296
|
yookty
| 2025-08-19T17:41:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"whistling exotic chicken",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:41:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- whistling exotic chicken
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dharshaneshwaran/MultimodalDeepfakeDetector
|
Dharshaneshwaran
| 2025-08-19T17:41:22Z | 0 | 0 | null |
[
"arxiv:1604.02878",
"arxiv:2104.00298",
"arxiv:2008.06456",
"arxiv:1901.08971",
"region:us"
] | null | 2025-08-19T17:36:23Z |
# DeepSecure-AI
DeepSecure-AI is a powerful open-source tool designed to detect fake images, videos, and audio. Using state-of-the-art deep learning techniques such as EfficientNetV2 and MTCNN, it offers frame-by-frame video analysis for high-accuracy deepfake detection. It's developed with a focus on ease of use, making it accessible to researchers, developers, and security analysts.
---
## Features
- Multimedia Detection: Detect deepfakes in images, videos, and audio files using a unified platform.
- High Accuracy: Leverages EfficientNetV2 for enhanced prediction performance and accurate results.
- Real-Time Video Analysis: Frame-by-frame analysis of videos with automatic face detection.
- User-Friendly Interface: Easy-to-use interface built with Gradio for uploading and processing media files.
- Open Source: Completely open source under the MIT license, making it available for developers to extend and improve.
---
## Demo-Data
You can test the deepfake detection capabilities of DeepSecure-AI by uploading your video files. The tool will analyze each frame of the video, detect faces, and determine the likelihood of the video being real or fake.
Examples:
1. [Video1-fake-1-ff.mp4](#)
2. [Video6-real-1-ff.mp4](#)
---
## How It Works
DeepSecure-AI uses the following architecture (a minimal code sketch follows the list):
1. Face Detection:
The [MTCNN](https://arxiv.org/abs/1604.02878) model detects faces in each frame of the video. If no face is detected, it will use the previous frame's face to ensure accuracy.
2. Fake vs. Real Classification:
Once the face is detected, it's resized and fed into the [EfficientNetV2](https://arxiv.org/abs/2104.00298) deep learning model, which determines the likelihood of the frame being real or fake.
3. Fake Confidence:
A final prediction is generated as a percentage score, indicating the confidence that the media is fake.
4. Results:
DeepSecure-AI provides an output video, highlighting the detected faces and a summary of whether the input is classified as real or fake.
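The sketch below ties these steps together for a single frame. The weights filename and the 224x224 input size are assumptions, since the card only says to place the pre-trained EfficientNetV2 weights in the project folder:
```py
# Hedged sketch of the frame-level pipeline: MTCNN face detection, then an
# EfficientNetV2 classifier producing a fake-confidence score.
import numpy as np
import tensorflow as tf
from PIL import Image
from facenet_pytorch import MTCNN

mtcnn = MTCNN(select_largest=True)                                    # face detector
classifier = tf.keras.models.load_model("efficientnetv2_weights.h5")  # hypothetical filename

def fake_confidence(frame: Image.Image) -> float | None:
    """Return the fake-probability for one frame, or None if no face is found."""
    boxes, _ = mtcnn.detect(frame)
    if boxes is None:
        return None                                           # caller may reuse the previous frame's face
    x1, y1, x2, y2 = boxes[0].astype(int)
    face = frame.crop((x1, y1, x2, y2)).resize((224, 224))    # input size is an assumption
    batch = np.asarray(face, dtype=np.float32)[None, ...]
    return float(classifier.predict(batch, verbose=0)[0][0])  # assumes a single sigmoid output
```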
---
## Project Setup
### Prerequisites
Ensure you have the following installed:
- Python 3.10
- Gradio (`pip install gradio`)
- TensorFlow (`pip install tensorflow`)
- OpenCV (`pip install opencv-python`)
- PyTorch (`pip install torch torchvision torchaudio`)
- facenet-pytorch (`pip install facenet-pytorch`)
- MoviePy (`pip install moviepy`)
### Installation
1. Clone the repository and enter it:
   `cd DeepSecure-AI`
2. Install the required dependencies:
   `pip install -r requirements.txt`
3. Download the pre-trained model weights for EfficientNetV2 and place them in the project folder.
### Running the Application
1. Launch the Gradio interface:
   `python app.py`
2. The web interface will be available locally. You can upload a video, and DeepSecure-AI will analyze it and display the results.
---
## Example Usage
Upload a video or image to DeepSecure-AI to detect fake media. Here are some sample predictions:
- Video Analysis: The tool will detect faces from each frame and classify whether the video is fake or real.
- Result Output: A GIF or MP4 file with the sequence of detected faces and classification result will be provided.
---
## Technologies Used
- TensorFlow: For building and training deep learning models.
- EfficientNetV2: The core model for image and video classification.
- MTCNN: For face detection in images and videos.
- OpenCV: For video processing and frame manipulation.
- MoviePy: For video editing and result generation.
- Gradio: To create a user-friendly interface for interacting with the deepfake detector.
---
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
---
## Contributions
Contributions are welcome! If you'd like to improve the tool, feel free to submit a pull request or raise an issue.
For more information, check the [Contribution Guidelines](CONTRIBUTING.md).
---
## References
- Li et al. (2020): [Celeb-DF(V2)](https://arxiv.org/abs/2008.06456)
- Rossler et al. (2019): [FaceForensics++](https://arxiv.org/abs/1901.08971)
- Timesler (2020): [Facial Recognition Model in PyTorch](https://www.kaggle.com/timesler/facial-recognition-model-in-pytorch)
---
### Disclaimer
DeepSecure-AI is a research project designed for educational purposes. Please use it responsibly and always give proper credit when utilizing the model in your work.
|
viraja1/banking
|
viraja1
| 2025-08-19T17:40:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-270m-it",
"base_model:finetune:unsloth/gemma-3-270m-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T17:40:17Z |
---
base_model: unsloth/gemma-3-270m-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** viraja1
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-270m-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755623513
|
kojeklollipop
| 2025-08-19T17:40:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:40:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
catme0w/MolScribe-Long
|
catme0w
| 2025-08-19T17:37:36Z | 0 | 0 | null |
[
"base_model:yujieq/MolScribe",
"base_model:finetune:yujieq/MolScribe",
"license:mit",
"region:us"
] | null | 2025-08-19T04:44:52Z |
---
license: mit
base_model:
- yujieq/MolScribe
---
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755623354
|
ihsanridzi
| 2025-08-19T17:36:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:35:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zenqqq/Qwen3-0.6B-Gensyn-Swarm-slithering_darting_goat
|
zenqqq
| 2025-08-19T17:34:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am slithering_darting_goat",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T17:34:22Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am slithering_darting_goat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755623074
|
indoempatnol
| 2025-08-19T17:31:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:31:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755623133
|
koloni
| 2025-08-19T17:30:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:30:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755622996
|
calegpedia
| 2025-08-19T17:30:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stealthy slimy rooster",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:30:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stealthy slimy rooster
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Vasya777/blockassist-bc-lumbering_enormous_sloth_1755624278
|
Vasya777
| 2025-08-19T17:25:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lumbering enormous sloth",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:25:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lumbering enormous sloth
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755622731
|
hakimjustbao
| 2025-08-19T17:24:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:24:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lisaozill03/blockassist-bc-rugged_prickly_alpaca_1755622770
|
lisaozill03
| 2025-08-19T17:24:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"rugged prickly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:24:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- rugged prickly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Pradeepgupta112233/runwayml-stable-diffusion-v1-5
|
Pradeepgupta112233
| 2025-08-19T17:24:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"region:us"
] |
text-to-image
| 2025-08-19T17:20:14Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/8043.jpg
text: '-'
base_model: ''
instance_prompt: SD 1.5
---
# My Project – Character LoRA
<Gallery />
## Model description
my_project
## Trigger words
You should use `SD 1.5` to trigger the image generation.
## Download model
[Download](/Pradeepgupta112233/runwayml-stable-diffusion-v1-5/tree/main) them in the Files & versions tab.
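A hedged usage sketch: the card leaves `base_model` empty, so this assumes runwayml/stable-diffusion-v1-5 as the base (matching the repo name) and a single LoRA `.safetensors` file in the repo:
```py
# Hedged sketch: load the assumed SD 1.5 base model, attach this LoRA, and generate.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Pradeepgupta112233/runwayml-stable-diffusion-v1-5")  # assumes one .safetensors file
image = pipe("SD 1.5, portrait of a character").images[0]  # 'SD 1.5' is the trigger phrase
image.save("sample.png")
```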
|
AnonymousCS/xlmr_finnish_immigration3
|
AnonymousCS
| 2025-08-19T17:23:49Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T17:19:59Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_finnish_immigration3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_finnish_immigration3
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0700
- Accuracy: 0.9846
- 1-f1: 0.9767
- 1-recall: 0.9767
- 1-precision: 0.9767
- Balanced Acc: 0.9826
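A minimal inference sketch (an illustration, not from the original card; the example sentence is hypothetical and the emitted label names depend on the model's config):
```py
from transformers import pipeline

# Binary text classifier fine-tuned from XLM-R large.
clf = pipeline("text-classification", model="AnonymousCS/xlmr_finnish_immigration3")
print(clf("Maahanmuutto on tärkeä aihe Suomessa."))  # e.g. [{'label': 'LABEL_1', 'score': 0.98}]
```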
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2027 | 1.0 | 5 | 0.0699 | 0.9846 | 0.9767 | 0.9767 | 0.9767 | 0.9826 |
| 0.1827 | 2.0 | 10 | 0.0772 | 0.9769 | 0.9655 | 0.9767 | 0.9545 | 0.9769 |
| 0.0918 | 3.0 | 15 | 0.0637 | 0.9846 | 0.9767 | 0.9767 | 0.9767 | 0.9826 |
| 0.067 | 4.0 | 20 | 0.0844 | 0.9692 | 0.9545 | 0.9767 | 0.9333 | 0.9711 |
| 0.0457 | 5.0 | 25 | 0.0700 | 0.9846 | 0.9767 | 0.9767 | 0.9767 | 0.9826 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
AppliedLucent/nemo-phase5
|
AppliedLucent
| 2025-08-19T17:23:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:AppliedLucent/nemo-phase4",
"base_model:finetune:AppliedLucent/nemo-phase4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T17:10:38Z |
---
base_model: AppliedLucent/nemo-phase4
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** AppliedLucent
- **License:** apache-2.0
- **Finetuned from model:** AppliedLucent/nemo-phase4
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Orginal-Bindura-University-viral-video-Cli/New.full.videos.Bindura.University.Viral.Video.Official.Tutorial
|
Orginal-Bindura-University-viral-video-Cli
| 2025-08-19T17:22:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T17:22:36Z |
|
abdullahzahran/flan-t5-base-peft-dialogue-summary-abdUllahsamir
|
abdullahzahran
| 2025-08-19T17:21:30Z | 0 | 1 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:google/flan-t5-base",
"lora",
"transformers",
"base_model:google/flan-t5-base",
"license:apache-2.0",
"region:us"
] | null | 2025-08-19T14:39:02Z |
---
library_name: peft
license: apache-2.0
base_model: google/flan-t5-base
tags:
- base_model:adapter:google/flan-t5-base
- lora
- transformers
model-index:
- name: flan-t5-base-peft-dialogue-summary-abdUllahsamir
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-peft-dialogue-summary-abdUllahsamir
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1069
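A hedged usage sketch: load the base model and attach this adapter with PEFT (the dialogue and the prompt format are illustrative assumptions):
```py
# Hedged sketch: attach the PEFT adapter to flan-t5-base and summarize a dialogue.
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model = PeftModel.from_pretrained(base, "abdullahzahran/flan-t5-base-peft-dialogue-summary-abdUllahsamir")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

dialogue = "A: Did you finish the report?\nB: Almost, I'll send it tonight."  # example input
inputs = tokenizer("Summarize the following conversation.\n" + dialogue, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```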
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1103 | 1.0 | 6230 | 0.1152 |
| 0.1544 | 2.0 | 12460 | 0.1069 |
### Framework versions
- PEFT 0.17.0
- Transformers 4.55.2
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755622458
|
vwzyrraz7l
| 2025-08-19T17:21:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:21:07Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755623981
|
Dejiat
| 2025-08-19T17:20:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:20:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AnonymousCS/xlmr_english_immigration3
|
AnonymousCS
| 2025-08-19T17:19:07Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T17:16:06Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_english_immigration3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_english_immigration3
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1016
- Accuracy: 0.9692
- 1-f1: 0.9535
- 1-recall: 0.9535
- 1-precision: 0.9535
- Balanced Acc: 0.9652
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.1652 | 1.0 | 5 | 0.0668 | 0.9769 | 0.9655 | 0.9767 | 0.9545 | 0.9769 |
| 0.0409 | 2.0 | 10 | 0.0726 | 0.9769 | 0.9655 | 0.9767 | 0.9545 | 0.9769 |
| 0.0678 | 3.0 | 15 | 0.1016 | 0.9692 | 0.9535 | 0.9535 | 0.9535 | 0.9652 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
chainway9/blockassist-bc-untamed_quick_eel_1755622218
|
chainway9
| 2025-08-19T17:17:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:17:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jacksss123/net72_uid121
|
Jacksss123
| 2025-08-19T17:16:36Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-08-19T17:12:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cmei4v9jx0qkarts8bq6vrjku_cmeirrt8b0rterts8q66axwlu
|
BootesVoid
| 2025-08-19T17:16:29Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-19T17:16:27Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LUNANOIRE
---
# Cmei4V9Jx0Qkarts8Bq6Vrjku_Cmeirrt8B0Rterts8Q66Axwlu
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LUNANOIRE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LUNANOIRE",
"lora_weights": "https://huggingface.co/BootesVoid/cmei4v9jx0qkarts8bq6vrjku_cmeirrt8b0rterts8q66axwlu/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmei4v9jx0qkarts8bq6vrjku_cmeirrt8b0rterts8q66axwlu', weight_name='lora.safetensors')
image = pipeline('LUNANOIRE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmei4v9jx0qkarts8bq6vrjku_cmeirrt8b0rterts8q66axwlu/discussions) to add images that show off what you’ve made with this LoRA.
|
smirki/UIGEN-X-4B-SFT-LoRA-128-lora
|
smirki
| 2025-08-19T17:14:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T17:14:34Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
achung20030/smolvla_all_dataset
|
achung20030
| 2025-08-19T17:14:23Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:lerobot/aloha_sim_transfer_cube_human_image",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-18T17:17:06Z |
---
base_model: lerobot/smolvla_base
datasets: lerobot/aloha_sim_transfer_cube_human_image
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- smolvla
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
* **License:** apache-2.0
|
heyaudio/lessons_v1
|
heyaudio
| 2025-08-19T17:14:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:mistralai/Ministral-8B-Instruct-2410",
"lora",
"transformers",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:mistralai/Ministral-8B-Instruct-2410",
"region:us"
] |
text-generation
| 2025-08-19T13:09:43Z |
---
base_model: mistralai/Ministral-8B-Instruct-2410
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:mistralai/Ministral-8B-Instruct-2410
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
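The card leaves this blank; a hedged sketch based only on the YAML header (base model and adapter repo) might look like this, with the prompt as an illustrative assumption:
```py
# Hedged sketch: attach this PEFT adapter to its declared base model.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Ministral-8B-Instruct-2410", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "heyaudio/lessons_v1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Ministral-8B-Instruct-2410")

messages = [{"role": "user", "content": "Plan a short lesson on fractions."}]  # example prompt
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=80)[0], skip_special_tokens=True))
```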
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
AnonymousCS/xlmr_danish_immigration3
|
AnonymousCS
| 2025-08-19T17:09:38Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T17:06:38Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_danish_immigration3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_danish_immigration3
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2462
- Accuracy: 0.9077
- 1-f1: 0.8421
- 1-recall: 0.7442
- 1-precision: 0.9697
- Balanced Acc: 0.8663
## Model description
More information needed
## Intended uses & limitations
More information needed
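A minimal inference sketch, assuming the standard 🤗 `text-classification` pipeline applies (the label meanings are not documented, and the Danish example sentence is purely hypothetical):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="AnonymousCS/xlmr_danish_immigration3")
# Hypothetical Danish input; labels follow whatever the checkpoint's config defines.
print(classifier("Danmark bør tage imod flere flygtninge."))
```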
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.2284 | 1.0 | 5 | 0.2331 | 0.9077 | 0.8421 | 0.7442 | 0.9697 | 0.8663 |
| 0.6095 | 2.0 | 10 | 0.2447 | 0.9154 | 0.8571 | 0.7674 | 0.9706 | 0.8780 |
| 0.2055 | 3.0 | 15 | 0.2462 | 0.9077 | 0.8421 | 0.7442 | 0.9697 | 0.8663 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
gasoline2255/blockassist-bc-flightless_sizable_wildebeest_1755623220
|
gasoline2255
| 2025-08-19T17:09:29Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flightless sizable wildebeest",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:09:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flightless sizable wildebeest
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755621443
|
kojeklollipop
| 2025-08-19T17:06:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"spotted amphibious stork",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:06:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- spotted amphibious stork
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
WenFengg/21_14l5_20_8
|
WenFengg
| 2025-08-19T17:06:27Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T16:57:21Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Ver-full-videos-shirley-arica-Clips/Ver.Viral.video.shirley.arica.polemica.viral.en.twitter.y.telegram
|
Ver-full-videos-shirley-arica-Clips
| 2025-08-19T17:06:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T17:06:14Z |
[](https://tinyurl.com/bdk3zxvb)
|
chenqi1126/SpeechFlow_ckpts
|
chenqi1126
| 2025-08-19T17:05:48Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-19T15:03:28Z |
---
license: apache-2.0
---
|
spencer0051/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jagged_endangered_pheasant
|
spencer0051
| 2025-08-19T17:05:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am jagged_endangered_pheasant",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-17T10:25:08Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am jagged_endangered_pheasant
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
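The card leaves this blank; a minimal generation sketch based only on the repo's `text-generation` pipeline tag (not documented usage):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "spencer0051/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jagged_endangered_pheasant"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Say hello in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
print(tokenizer.decode(model.generate(inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```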
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
espnet/geolid_vl107only_independent_trainable
|
espnet
| 2025-08-19T17:05:09Z | 0 | 0 |
espnet
|
[
"espnet",
"tensorboard",
"audio",
"language-identification",
"abk",
"afr",
"amh",
"ara",
"asm",
"aze",
"bak",
"bel",
"ben",
"bod",
"bos",
"bre",
"bul",
"cat",
"ceb",
"ces",
"cmn",
"cym",
"dan",
"deu",
"ell",
"eng",
"epo",
"est",
"eus",
"fao",
"fas",
"fin",
"fra",
"glg",
"glv",
"grn",
"guj",
"hat",
"hau",
"haw",
"heb",
"hin",
"hrv",
"hun",
"hye",
"ina",
"ind",
"isl",
"ita",
"jav",
"jpn",
"kan",
"kat",
"kaz",
"khm",
"kor",
"lao",
"lat",
"lav",
"lin",
"lit",
"ltz",
"mal",
"mar",
"mkd",
"mlg",
"mlt",
"mon",
"mri",
"msa",
"mya",
"nep",
"nld",
"nno",
"nor",
"oci",
"pan",
"pol",
"por",
"pus",
"ron",
"rus",
"san",
"sco",
"sin",
"slk",
"slv",
"sna",
"snd",
"som",
"spa",
"sqi",
"srp",
"sun",
"swa",
"swe",
"tam",
"tat",
"tel",
"tgk",
"tgl",
"tha",
"tuk",
"tur",
"ukr",
"urd",
"uzb",
"vie",
"war",
"yid",
"yor",
"dataset:geolid",
"arxiv:2005.07143",
"license:cc-by-4.0",
"region:us"
] | null | 2025-08-19T05:37:05Z |
---
tags:
- espnet
- audio
- language-identification
language:
- abk
- afr
- amh
- ara
- asm
- aze
- bak
- bel
- ben
- bod
- bos
- bre
- bul
- cat
- ceb
- ces
- cmn
- cym
- dan
- deu
- ell
- eng
- epo
- est
- eus
- fao
- fas
- fin
- fra
- glg
- glv
- grn
- guj
- hat
- hau
- haw
- heb
- hin
- hrv
- hun
- hye
- ina
- ind
- isl
- ita
- jav
- jpn
- kan
- kat
- kaz
- khm
- kor
- lao
- lat
- lav
- lin
- lit
- ltz
- mal
- mar
- mkd
- mlg
- mlt
- mon
- mri
- msa
- mya
- nep
- nld
- nno
- nor
- oci
- pan
- pol
- por
- pus
- ron
- rus
- san
- sco
- sin
- slk
- slv
- sna
- snd
- som
- spa
- sqi
- srp
- sun
- swa
- swe
- tam
- tat
- tel
- tgk
- tgl
- tha
- tuk
- tur
- ukr
- urd
- uzb
- vie
- war
- yid
- yor
datasets:
- geolid
license: cc-by-4.0
---
## ESPnet2 Spoken Language Identification (LID) model
### `espnet/geolid_vl107only_independent_trainable`
This geolocation-aware language identification (LID) model is developed using the [ESPnet](https://github.com/espnet/espnet/) toolkit. It integrates the powerful pretrained [MMS-1B](https://huggingface.co/facebook/mms-1b) as the encoder and employs [ECAPA-TDNN](https://arxiv.org/pdf/2005.07143) as the embedding extractor to achieve robust spoken language identification.
The main innovations of this model are:
1. Incorporating geolocation prediction as an auxiliary task during training.
2. Conditioning the intermediate representations of the self-supervised learning (SSL) encoder on intermediate geolocation predictions.
This geolocation-aware strategy greatly improves robustness, especially for dialects and accented variations.
For further details on the geolocation-aware LID methodology, please refer to our paper: *Geolocation-Aware Robust Spoken Language Identification* (arXiv link to be added).
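As a rough illustration of how the auxiliary objectives combine, here is a minimal sketch based on our reading of the hyperparameters in the config below (`lang2vec_weight: 0.2`, `inter_lang2vec_loss_weight: 0.4`) — not the authors' actual implementation:

```python
import torch

def geolid_loss(lid_loss, geo_loss, inter_geo_losses,
                lang2vec_weight=0.2, inter_weight=0.4):
    """Weighted sum of the main LID loss, the final-layer geolocation
    (lang2vec) loss, and the mean of the intermediate-layer geolocation
    losses; weights taken from the training config in this card."""
    inter = torch.stack(inter_geo_losses).mean() if inter_geo_losses else torch.tensor(0.0)
    return lid_loss + lang2vec_weight * geo_loss + inter_weight * inter

# Toy usage with scalar losses
print(geolid_loss(torch.tensor(1.2), torch.tensor(0.5),
                  [torch.tensor(0.6), torch.tensor(0.4)]))
```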
### Usage Guide: How to use in ESPnet2
#### Prerequisites
First, ensure you have ESPnet installed. If not, follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html).
#### Quick Start
Run the following commands to set up and use the pre-trained model:
```bash
cd espnet
pip install -e .
cd egs2/geolid/lid1
# Download the exp_voxlingua107_only directory to egs2/geolid/lid1
hf download espnet/geolid_vl107only_independent_trainable --local-dir . --exclude "README.md" "meta.yaml" ".gitattributes"
./run_voxlingua107_only.sh --skip_data_prep false --skip_train true --lid_config conf/voxlingua107_only/mms_ecapa_upcon_32_44_it0.4_independent_trainable.yaml
```
This will download the pre-trained model and run inference using the VoxLingua107 test data.
### Train and Evaluation Datasets
The training used only the VoxLingua107 dataset, comprising 6,628 hours of speech across 107 languages from YouTube.
| Dataset | Domain | #Langs. Train/Test | Dialect | Training Setup (VL107-only) |
| ------------- | ----------- | ------------------ | ------- | --------------------------- |
| [VoxLingua107](https://cs.taltech.ee/staff/tanel.alumae/data/voxlingua107/) | YouTube | 107/33 | No | Seen |
| [Babel](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=31a13cefb42647e924e0d2778d341decc44c40e9) | Telephone | 25/25 | No | Unseen |
| [FLEURS](https://huggingface.co/datasets/google/xtreme_s) | Read speech | 102/102 | No | Unseen |
| [ML-SUPERB 2.0](https://huggingface.co/datasets/espnet/ml_superb_hf) | Mixed | 137/(137, 8) | Yes | Unseen |
| [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) | Parliament | 16/16 | No | Unseen |
### Results
**Accuracy (%) on In-domain and Out-of-domain Test Sets**
<style>
.hf-model-cell {
max-width: 120px;
overflow-x: auto;
white-space: nowrap;
scrollbar-width: thin;
scrollbar-color: #888 #f1f1f1;
}
.config-cell {
max-width: 100px;
overflow-x: auto;
white-space: nowrap;
scrollbar-width: thin;
scrollbar-color: #888 #f1f1f1;
}
.hf-model-cell::-webkit-scrollbar,
.config-cell::-webkit-scrollbar {
height: 6px;
}
.hf-model-cell::-webkit-scrollbar-track,
.config-cell::-webkit-scrollbar-track {
background: #f1f1f1;
border-radius: 3px;
}
.hf-model-cell::-webkit-scrollbar-thumb,
.config-cell::-webkit-scrollbar-thumb {
background: #888;
border-radius: 3px;
}
.hf-model-cell::-webkit-scrollbar-thumb:hover,
.config-cell::-webkit-scrollbar-thumb:hover {
background: #555;
}
</style>
<div style="overflow-x: auto;">
| ESPnet Recipe | Config | VoxLingua107 | Babel | FLEURS | ML-SUPERB2.0 Dev | ML-SUPERB2.0 Dialect | VoxPopuli | Macro Avg. |
| ------------------------- | ----------- | ------------ | ----- | ------ | ---------------- | -------------------- | --------- | ---------- |
| <div class="hf-model-cell">[egs2/geolid/lid1](https://github.com/espnet/espnet/tree/master/egs2/geolid/lid1)</div> | <div class="config-cell">`conf/voxlingua107_only/mms_ecapa_upcon_32_44_it0.4_independent_trainable.yaml`</div> | 93.7 | 85.3 | 93.7 | 88.3 | 70.3 | 86.5 | 86.3 |
</div>
For more detailed inference results, please refer to the `exp_voxlingua107_only/lid_mms_ecapa_upcon_32_44_it0.4_independent_trainable_raw/inference` directory in this repository.
> **Note (2025-08-18):**
> The corresponding GitHub recipe [egs2/geolid/lid1](https://github.com/espnet/espnet/tree/master/egs2/geolid/lid1) has not yet been merged into the ESPnet master branch.
> See TODO: add PR link for the latest updates.
## LID config
<details><summary>expand</summary>
```
config: conf/voxlingua107_only/mms_ecapa_upcon_32_44_it0.4_independent_trainable.yaml
print_config: false
log_level: INFO
drop_last_iter: false
dry_run: false
iterator_type: category
valid_iterator_type: category
output_dir: exp_voxlingua107_only/lid_mms_ecapa_upcon_32_44_it0.4_independent_trainable_raw
ngpu: 1
seed: 3702
num_workers: 8
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
use_deepspeed: false
deepspeed_config: null
gradient_as_bucket_view: true
ddp_comm_hook: null
cudnn_enabled: true
cudnn_benchmark: true
cudnn_deterministic: false
use_tf32: false
collect_stats: false
write_collected_feats: false
max_epoch: 30
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- accuracy
- max
keep_nbest_models: 2
nbest_averaging_interval: 0
grad_clip: 9999
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: 100
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
use_adapter: false
adapter: lora
save_strategy: all
adapter_conf: {}
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 2880000
valid_batch_bins: null
category_sample_size: 10
upsampling_factor: 0.5
category_upsampling_factor: 0.5
dataset_upsampling_factor: 0.5
dataset_scaling_factor: 1.2
max_batch_size: 16
min_batch_size: 1
train_shape_file:
- exp_voxlingua107_only/lid_stats_16k/train/speech_shape
valid_shape_file:
- exp_voxlingua107_only/lid_stats_16k/valid/speech_shape
batch_type: catpow
language_upsampling_factor: 0.5
valid_batch_type: null
fold_length:
- 120000
sort_in_batch: descending
shuffle_within_batch: false
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
chunk_default_fs: null
chunk_max_abs_length: null
chunk_discard_short_samples: true
train_data_path_and_name_and_type:
- - dump/raw/train_voxlingua107_lang/wav.scp
- speech
- sound
- - dump/raw/train_voxlingua107_lang/utt2lang
- lid_labels
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_voxlingua107_lang/wav.scp
- speech
- sound
- - dump/raw/dev_voxlingua107_lang/utt2lang
- lid_labels
- text
multi_task_dataset: false
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
allow_multi_rates: false
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 5.0e-06
betas:
- 0.9
- 0.98
scheduler: tristagelr
scheduler_conf:
max_steps: 30000
warmup_ratio: 0.3
hold_ratio: 0.2
decay_ratio: 0.5
init_lr_scale: 0.6
final_lr_scale: 0.1
init: null
use_preprocessor: true
input_size: null
target_duration: 3.0
lang2utt: dump/raw/train_voxlingua107_lang/lang2utt
lang_num: 107
sample_rate: 16000
num_eval: 10
rir_scp: ''
model: upstream_condition
model_conf:
lang2vec_conditioning_layers:
- 32
- 36
- 40
- 44
apply_intermediate_lang2vec_loss: true
apply_intermediate_lang2vec_condition: true
inter_lang2vec_loss_weight: 0.4
cutoff_gradient_from_backbone: false
cutoff_gradient_before_condproj: true
shared_conditioning_proj: false
frontend: s3prl_condition
frontend_conf:
frontend_conf:
upstream: hf_wav2vec2_condition
path_or_url: facebook/mms-1b
download_dir: ./hub
multilayer_feature: true
specaug: null
specaug_conf: {}
normalize: utterance_mvn
normalize_conf:
norm_vars: false
encoder: ecapa_tdnn
encoder_conf:
model_scale: 8
ndim: 512
output_size: 1536
pooling: chn_attn_stat
pooling_conf: {}
projector: rawnet3
projector_conf:
output_size: 192
encoder_condition: identity
encoder_condition_conf: {}
pooling_condition: chn_attn_stat
pooling_condition_conf: {}
projector_condition: rawnet3
projector_condition_conf: {}
preprocessor: lid
preprocessor_conf:
fix_duration: false
sample_rate: 16000
noise_apply_prob: 0.0
noise_info:
- - 1.0
- dump/raw/musan_speech.scp
- - 4
- 7
- - 13
- 20
- - 1.0
- dump/raw/musan_noise.scp
- - 1
- 1
- - 0
- 15
- - 1.0
- dump/raw/musan_music.scp
- - 1
- 1
- - 5
- 15
rir_apply_prob: 0.0
rir_scp: dump/raw/rirs.scp
use_lang2vec: true
lang2vec_type: geo
loss: aamsoftmax_sc_topk_lang2vec
loss_conf:
margin: 0.5
scale: 30
K: 3
mp: 0.06
k_top: 5
lang2vec_dim: 299
lang2vec_type: geo
lang2vec_weight: 0.2
required:
- output_dir
version: '202506'
distributed: false
```
</details>
### Citation
```BibTex
@inproceedings{wang2025geolid,
author={Qingzheng Wang and Hye-jin Shim and Jiancheng Sun and Shinji Watanabe},
title={Geolocation-Aware Robust Spoken Language Identification},
year={2025},
booktitle={Proceedings of ASRU},
}
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
|
Thatphum/got-ocr-2-0-fixed
|
Thatphum
| 2025-08-19T17:05:05Z | 198 | 0 |
transformers
|
[
"transformers",
"safetensors",
"got_ocr2",
"image-text-to-text",
"got",
"vision-language",
"ocr2.0",
"multilingual",
"arxiv:2409.01704",
"arxiv:2405.14295",
"arxiv:2312.06109",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-07-30T04:16:32Z |
---
pipeline_tag: image-text-to-text
library_name: transformers
language:
- multilingual
tags:
- got
- vision-language
- ocr2.0
license: apache-2.0
---
<h1>General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model - HF Transformers 🤗 implementation
</h1>
[🤗 Spaces Demo](https://huggingface.co/spaces/yonigozlan/GOT-OCR-Transformers) | [🌟GitHub](https://github.com/Ucas-HaoranWei/GOT-OCR2.0/) | [📜Paper](https://arxiv.org/abs/2409.01704)
[Haoran Wei*](https://scholar.google.com/citations?user=J4naK0MAAAAJ&hl=en), Chenglong Liu*, Jinyue Chen, Jia Wang, Lingyu Kong, Yanming Xu, [Zheng Ge](https://joker316701882.github.io/), Liang Zhao, [Jianjian Sun](https://scholar.google.com/citations?user=MVZrGkYAAAAJ&hl=en), [Yuang Peng](https://scholar.google.com.hk/citations?user=J0ko04IAAAAJ&hl=zh-CN&oi=ao), Chunrui Han, [Xiangyu Zhang](https://scholar.google.com/citations?user=yuB-cfoAAAAJ&hl=en)

Tips:
GOT-OCR2 works on a wide range of tasks, including plain document OCR, scene text OCR, formatted document OCR, and even OCR for tables, charts, mathematical formulas, geometric shapes, molecular formulas and sheet music. While this implementation of the model will only output plain text, the outputs can be further processed to render the desired format, with packages like `pdftex`, `mathpix`, `matplotlib`, `tikz`, `verovio` or `pyecharts`.
The model can also be used for interactive OCR, where the user can specify the region to be recognized by providing the coordinates or the color of the region's bounding box.
This model was contributed by [yonigozlan](https://huggingface.co/yonigozlan).
The original code can be found [here](https://github.com/Ucas-HaoranWei/GOT-OCR2.0).
## Usage example
### Plain text inference
```python
>>> import torch
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = AutoModelForImageTextToText.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf", device_map=device)
>>> processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")
>>> image = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/image_ocr.jpg"
>>> inputs = processor(image, return_tensors="pt").to(device)
>>> generate_ids = model.generate(
... **inputs,
... do_sample=False,
... tokenizer=processor.tokenizer,
... stop_strings="<|im_end|>",
... max_new_tokens=4096,
... )
>>> processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
"R&D QUALITY IMPROVEMENT\nSUGGESTION/SOLUTION FORM\nName/Phone Ext. : (...)"
```
### Plain text inference batched
```python
>>> import torch
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = AutoModelForImageTextToText.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf", device_map=device)
>>> processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")
>>> image1 = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/multi_box.png"
>>> image2 = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/image_ocr.jpg"
>>> inputs = processor([image1, image2], return_tensors="pt").to(device)
>>> generate_ids = model.generate(
... **inputs,
... do_sample=False,
... tokenizer=processor.tokenizer,
... stop_strings="<|im_end|>",
... max_new_tokens=4,
... )
>>> processor.batch_decode(generate_ids[:, inputs["input_ids"].shape[1] :], skip_special_tokens=True)
["Reducing the number", "R&D QUALITY"]
```
### Formatted text inference
GOT-OCR2 can also generate formatted text, such as markdown or LaTeX. Here is an example of how to generate formatted text:
```python
>>> import torch
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = AutoModelForImageTextToText.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf", device_map=device)
>>> processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")
>>> image = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/latex.png"
>>> inputs = processor(image, return_tensors="pt", format=True).to(device)
>>> generate_ids = model.generate(
... **inputs,
... do_sample=False,
... tokenizer=processor.tokenizer,
... stop_strings="<|im_end|>",
... max_new_tokens=4096,
... )
>>> processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
"\\author{\nHanwen Jiang* \\(\\quad\\) Arjun Karpur \\({ }^{\\dagger} \\quad\\) Bingyi Cao \\({ }^{\\dagger} \\quad\\) (...)"
```
### Inference on multiple pages
Although it might be reasonable in most cases to use a “for loop” for multi-page processing, some text data with formatting across several pages makes it necessary to process all pages at once. GOT introduces a multi-page OCR (without “for loop”) feature, where multiple pages can be processed by the model at once, with the output being one continuous text.
Here is an example of how to process multiple pages at once:
```python
>>> import torch
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = AutoModelForImageTextToText.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf", device_map=device)
>>> processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")
>>> image1 = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/page1.png"
>>> image2 = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/page2.png"
>>> inputs = processor([image1, image2], return_tensors="pt", multi_page=True, format=True).to(device)
>>> generate_ids = model.generate(
... **inputs,
... do_sample=False,
... tokenizer=processor.tokenizer,
... stop_strings="<|im_end|>",
... max_new_tokens=4096,
... )
>>> processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
"\\title{\nGeneral OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model\n}\n\\author{\nHaoran Wei (...)"
```
### Inference on cropped patches
GOT supports a 1024×1024 input resolution, which is sufficient for most OCR tasks, such as scene OCR or processing A4-sized PDF pages. However, certain scenarios, like horizontally stitched two-page PDFs commonly found in academic papers or images with unusual aspect ratios, can lead to accuracy issues when processed as a single image. To address this, GOT can dynamically crop an image into patches, process them all at once, and merge the results for better accuracy with such inputs.
Here is an example of how to process cropped patches:
```python
>>> import torch
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = AutoModelForImageTextToText.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf", torch_dtype=torch.bfloat16, device_map=device)
>>> processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")
>>> image = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/one_column.png"
>>> inputs = processor(image, return_tensors="pt", format=True, crop_to_patches=True, max_patches=3).to(device)
>>> generate_ids = model.generate(
... **inputs,
... do_sample=False,
... tokenizer=processor.tokenizer,
... stop_strings="<|im_end|>",
... max_new_tokens=4096,
... )
>>> processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
"on developing architectural improvements to make learnable matching methods generalize.\nMotivated by the above observations, (...)"
```
### Inference on a specific region
GOT supports interactive OCR, where the user can specify the region to be recognized by providing the coordinates or the color of the region's bounding box. Here is an example of how to process a specific region:
```python
>>> import torch
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = AutoModelForImageTextToText.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf", device_map=device)
>>> processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")
>>> image = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/multi_box.png"
>>> inputs = processor(image, return_tensors="pt", color="green").to(device) # or box=[x1, y1, x2, y2] for coordinates (image pixels)
>>> generate_ids = model.generate(
... **inputs,
... do_sample=False,
... tokenizer=processor.tokenizer,
... stop_strings="<|im_end|>",
... max_new_tokens=4096,
... )
>>> processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
"You should keep in mind what features from the module should be used, especially \nwhen you’re planning to sell a template."
```
### Inference on general OCR data example: sheet music
Although this implementation of the model will only output plain text, the outputs can be further processed to render the desired format, with packages like `pdftex`, `mathpix`, `matplotlib`, `tikz`, `verovio` or `pyecharts`.
Here is an example of how to process sheet music:
```python
>>> import torch
>>> import verovio
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> model = AutoModelForImageTextToText.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf", device_map=device)
>>> processor = AutoProcessor.from_pretrained("stepfun-ai/GOT-OCR-2.0-hf")
>>> image = "https://huggingface.co/datasets/hf-internal-testing/fixtures_got_ocr/resolve/main/sheet_music.png"
>>> inputs = processor(image, return_tensors="pt", format=True).to(device)
>>> generate_ids = model.generate(
... **inputs,
... do_sample=False,
... tokenizer=processor.tokenizer,
... stop_strings="<|im_end|>",
... max_new_tokens=4096,
... )
>>> outputs = processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
>>> tk = verovio.toolkit()
>>> tk.loadData(outputs)
>>> tk.setOptions(
... {
... "pageWidth": 2100,
... "pageHeight": 800,
... "footer": "none",
... "barLineWidth": 0.5,
... "beamMaxSlope": 15,
... "staffLineWidth": 0.2,
... "spacingStaff": 6,
... }
... )
>>> tk.getPageCount()
>>> svg = tk.renderToSVG()
>>> svg = svg.replace('overflow="inherit"', 'overflow="visible"')
>>> with open("output.svg", "w") as f:
...     f.write(svg)
```
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/sheet_music.svg"
alt="drawing" width="600"/>
## Citation
If you find our work helpful, please consider citing our papers 📝 and liking this project ❤️!
```bib
@article{wei2024general,
title={General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model},
author={Wei, Haoran and Liu, Chenglong and Chen, Jinyue and Wang, Jia and Kong, Lingyu and Xu, Yanming and Ge, Zheng and Zhao, Liang and Sun, Jianjian and Peng, Yuang and others},
journal={arXiv preprint arXiv:2409.01704},
year={2024}
}
@article{liu2024focus,
title={Focus Anywhere for Fine-grained Multi-page Document Understanding},
author={Liu, Chenglong and Wei, Haoran and Chen, Jinyue and Kong, Lingyu and Ge, Zheng and Zhu, Zining and Zhao, Liang and Sun, Jianjian and Han, Chunrui and Zhang, Xiangyu},
journal={arXiv preprint arXiv:2405.14295},
year={2024}
}
@article{wei2023vary,
title={Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models},
author={Wei, Haoran and Kong, Lingyu and Chen, Jinyue and Zhao, Liang and Ge, Zheng and Yang, Jinrong and Sun, Jianjian and Han, Chunrui and Zhang, Xiangyu},
journal={arXiv preprint arXiv:2312.06109},
year={2023}
}
```
|
EZCon/Qwen2.5-VL-7B-Instruct-4bit-mlx
|
EZCon
| 2025-08-19T17:05:03Z | 39 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"multimodal",
"unsloth",
"mlx",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] |
image-text-to-text
| 2025-08-05T07:17:26Z |
---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- unsloth
- mlx
library_name: transformers
---
# EZCon/Qwen2.5-VL-7B-Instruct-4bit-mlx
This model was converted to MLX format from [`unsloth/Qwen2.5-VL-7B-Instruct`](https://huggingface.co/unsloth/Qwen2.5-VL-7B-Instruct) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/unsloth/Qwen2.5-VL-7B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/Qwen2.5-VL-7B-Instruct-4bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
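A Python sketch is also possible, assuming the `load`/`generate` helpers from the mlx-vlm README (check your installed version for exact signatures):

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "EZCon/Qwen2.5-VL-7B-Instruct-4bit-mlx"
model, processor = load(model_path)
config = load_config(model_path)

images = ["path/to/image.jpg"]  # hypothetical local image path
prompt = apply_chat_template(processor, config, "Describe this image.", num_images=len(images))
print(generate(model, processor, prompt, images, max_tokens=100, verbose=False))
```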
|
EZCon/Qwen2.5-VL-3B-Instruct-8bit-mlx
|
EZCon
| 2025-08-19T17:04:24Z | 16 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"multimodal",
"unsloth",
"mlx",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
image-text-to-text
| 2025-04-18T03:49:01Z |
---
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- unsloth
- mlx
library_name: transformers
---
# EZCon/Qwen2.5-VL-3B-Instruct-8bit-mlx
This model was converted to MLX format from [`unsloth/Qwen2.5-VL-3B-Instruct`](https://huggingface.co/unsloth/Qwen2.5-VL-3B-Instruct) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/unsloth/Qwen2.5-VL-3B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/Qwen2.5-VL-3B-Instruct-8bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
EZCon/Qwen2.5-VL-3B-Instruct-mlx
|
EZCon
| 2025-08-19T17:03:32Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"multimodal",
"unsloth",
"mlx",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-05T07:02:34Z |
---
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- unsloth
- mlx
library_name: transformers
---
# EZCon/Qwen2.5-VL-3B-Instruct-mlx
This model was converted to MLX format from [`unsloth/Qwen2.5-VL-3B-Instruct`](https://huggingface.co/unsloth/Qwen2.5-VL-3B-Instruct) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/unsloth/Qwen2.5-VL-3B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/Qwen2.5-VL-3B-Instruct-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755622935
|
Dejiat
| 2025-08-19T17:03:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:02:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
llm-jp/optimal-sparsity-math-d512-E16-k2-520M-A170M
|
llm-jp
| 2025-08-19T17:02:31Z | 0 | 0 | null |
[
"safetensors",
"mixtral",
"region:us"
] | null | 2025-08-19T16:53:35Z |
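## How to use
A minimal loading sketch — an assumption based on the repo's `mixtral`/`safetensors` tags rather than documented usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llm-jp/optimal-sparsity-math-d512-E16-k2-520M-A170M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Q: What is 12 * 34?\nA:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```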
## How to cite
If you find our work helpful, please feel free to cite the paper.
```
@inproceedings{
nakamura2025optimal,
title={Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks},
author={Taishi Nakamura and Satoki Ishikawa and Masaki Kawamura and Takumi Okamoto and Daisuke Nohara and Jun Suzuki and Rio Yokota},
booktitle={2nd AI for Math Workshop @ ICML 2025},
year={2025},
url={https://openreview.net/forum?id=Ewj06opLqW}
}
```
|
EZCon/Qwen2-VL-2B-Instruct-4bit-mlx
|
EZCon
| 2025-08-19T17:02:17Z | 39 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-to-text",
"multimodal",
"qwen",
"qwen2",
"unsloth",
"vision",
"mlx",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:quantized:Qwen/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] |
image-text-to-text
| 2025-05-28T07:19:57Z |
---
base_model: Qwen/Qwen2-VL-2B-Instruct
language:
- en
library_name: transformers
pipeline_tag: image-text-to-text
license: apache-2.0
tags:
- multimodal
- qwen
- qwen2
- unsloth
- transformers
- vision
- mlx
---
# EZCon/Qwen2-VL-2B-Instruct-4bit-mlx
This model was converted to MLX format from [`unsloth/Qwen2-VL-2B-Instruct`](https://huggingface.co/unsloth/Qwen2-VL-2B-Instruct) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/unsloth/Qwen2-VL-2B-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/Qwen2-VL-2B-Instruct-4bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
Muapi/lunar-noir-style-lora-flux-pony
|
Muapi
| 2025-08-19T17:02:10Z | 0 | 0 | null |
[
"lora",
"stable-diffusion",
"flux.1-d",
"license:openrail++",
"region:us"
] | null | 2025-08-19T17:01:27Z |
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Lunar Noir Style - Lora Flux | Pony

**Base model**: Flux.1 D
**Trained words**: noir comic style with grey with red accents hues,
## 🧠 Usage (Python)
🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)
```python
import requests, os
url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
"prompt": "masterpiece, best quality, 1girl, looking at viewer",
"model_id": [{"model": "civitai:709710@793824", "weight": 1.0}],
"width": 1024,
"height": 1024,
"num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
|
zenqqq/blockassist-bc-restless_reptilian_caterpillar_1755622778
|
zenqqq
| 2025-08-19T17:01:03Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless reptilian caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T17:00:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless reptilian caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
EZCon/Qwen2.5-VL-3B-Instruct-abliterated-8bit-mlx
|
EZCon
| 2025-08-19T17:00:21Z | 216 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"multimodal",
"abliterated",
"uncensored",
"mlx",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] |
image-text-to-text
| 2025-05-15T02:33:08Z |
---
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- abliterated
- uncensored
- mlx
library_name: transformers
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
---
# EZCon/Qwen2.5-VL-3B-Instruct-abliterated-8bit-mlx
This model was converted to MLX format from [`huihui-ai/Qwen2.5-VL-3B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2.5-VL-3B-Instruct-abliterated) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-VL-3B-Instruct-abliterated) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/Qwen2.5-VL-3B-Instruct-abliterated-8bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
Subham-001/llama3.2_1B_emotion
|
Subham-001
| 2025-08-19T17:00:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T16:59:01Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
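The card leaves this blank; a minimal sketch based only on the repo's `text-generation` tag and its name (which suggests an emotion-oriented Llama 3.2 1B fine-tune — an assumption):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Subham-001/llama3.2_1B_emotion")
# Hypothetical prompt; adjust to the task the model was actually tuned for.
print(generator("I just heard some wonderful news and", max_new_tokens=40)[0]["generated_text"])
```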
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EZCon/Qwen2.5-VL-3B-Instruct-abliterated-mlx
|
EZCon
| 2025-08-19T16:59:21Z | 26 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"multimodal",
"abliterated",
"uncensored",
"mlx",
"image-text-to-text",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-02T04:19:36Z |
---
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- multimodal
- abliterated
- uncensored
- mlx
library_name: transformers
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
---
# EZCon/Qwen2.5-VL-3B-Instruct-abliterated-mlx
This model was converted to MLX format from [`huihui-ai/Qwen2.5-VL-3B-Instruct-abliterated`](https://huggingface.co/huihui-ai/Qwen2.5-VL-3B-Instruct-abliterated) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen2.5-VL-3B-Instruct-abliterated) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/Qwen2.5-VL-3B-Instruct-abliterated-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
aleebaster/blockassist-bc-sly_eager_boar_1755621210
|
aleebaster
| 2025-08-19T16:57:59Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sly eager boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:57:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sly eager boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
VIDEOS-19-Dr-Eman-viral-video-Clip/New.full.videos.Dr.Eman.Viral.Video.Official.Tutorial
|
VIDEOS-19-Dr-Eman-viral-video-Clip
| 2025-08-19T16:56:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T16:56:35Z |
[](https://tinyurl.com/bdk3zxvb)
|
EZCon/LFM2-VL-1.6B-8bit-mlx
|
EZCon
| 2025-08-19T16:56:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"lfm2-vl",
"image-text-to-text",
"liquid",
"lfm2",
"edge",
"mlx",
"conversational",
"custom_code",
"en",
"license:other",
"8-bit",
"region:us"
] |
image-text-to-text
| 2025-08-17T16:15:12Z |
---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
pipeline_tag: image-text-to-text
tags:
- liquid
- lfm2
- lfm2-vl
- edge
- mlx
---
# EZCon/LFM2-VL-1.6B-8bit-mlx
This model was converted to MLX format from [`LiquidAI/LFM2-VL-1.6B`](https://huggingface.co/LiquidAI/LFM2-VL-1.6B) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/LiquidAI/LFM2-VL-1.6B) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/LFM2-VL-1.6B-8bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
EZCon/SmolVLM2-500M-Video-Instruct-4bit-mlx
|
EZCon
| 2025-08-19T16:55:06Z | 30 | 0 |
transformers
|
[
"transformers",
"safetensors",
"smolvlm",
"image-text-to-text",
"mlx",
"conversational",
"en",
"dataset:HuggingFaceM4/the_cauldron",
"dataset:HuggingFaceM4/Docmatix",
"dataset:lmms-lab/LLaVA-OneVision-Data",
"dataset:lmms-lab/M4-Instruct-Data",
"dataset:HuggingFaceFV/finevideo",
"dataset:MAmmoTH-VL/MAmmoTH-VL-Instruct-12M",
"dataset:lmms-lab/LLaVA-Video-178K",
"dataset:orrzohar/Video-STaR",
"dataset:Mutonix/Vript",
"dataset:TIGER-Lab/VISTA-400K",
"dataset:Enxin/MovieChat-1K_train",
"dataset:ShareGPT4Video/ShareGPT4Video",
"base_model:HuggingFaceTB/SmolVLM-500M-Instruct",
"base_model:quantized:HuggingFaceTB/SmolVLM-500M-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"region:us"
] |
image-text-to-text
| 2025-08-01T02:49:35Z |
---
library_name: transformers
license: apache-2.0
datasets:
- HuggingFaceM4/the_cauldron
- HuggingFaceM4/Docmatix
- lmms-lab/LLaVA-OneVision-Data
- lmms-lab/M4-Instruct-Data
- HuggingFaceFV/finevideo
- MAmmoTH-VL/MAmmoTH-VL-Instruct-12M
- lmms-lab/LLaVA-Video-178K
- orrzohar/Video-STaR
- Mutonix/Vript
- TIGER-Lab/VISTA-400K
- Enxin/MovieChat-1K_train
- ShareGPT4Video/ShareGPT4Video
pipeline_tag: image-text-to-text
language:
- en
base_model:
- HuggingFaceTB/SmolVLM-500M-Instruct
tags:
- mlx
---
# EZCon/SmolVLM2-500M-Video-Instruct-4bit-mlx
This model was converted to MLX format from [`HuggingFaceTB/SmolVLM2-500M-Video-Instruct`](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) using mlx-vlm version **0.3.2**.
Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM2-500M-Video-Instruct) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model EZCon/SmolVLM2-500M-Video-Instruct-4bit-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755620903
|
hakimjustbao
| 2025-08-19T16:55:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:54:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RTannous/gpt-oss-finetuned
|
RTannous
| 2025-08-19T16:53:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T15:05:27Z |
---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** RTannous
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit
This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
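A minimal inference sketch (an assumption from the tags: the finetune should load like any `transformers` causal LM; the 20B gpt-oss base typically needs a recent `transformers` release and substantial GPU memory):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RTannous/gpt-oss-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Summarize what fine-tuning does in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```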
|
New-Clip-prabh-viral-videos/New.full.videos.prabh.Viral.Video.Official.Tutorial
|
New-Clip-prabh-viral-videos
| 2025-08-19T16:52:15Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T16:51:29Z |
[](https://tinyurl.com/bdk3zxvb)
|
thanobidex/blockassist-bc-colorful_shiny_hare_1755620672
|
thanobidex
| 2025-08-19T16:51:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"colorful shiny hare",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:51:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- colorful shiny hare
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
pempekmangedd/blockassist-bc-patterned_sturdy_dolphin_1755620601
|
pempekmangedd
| 2025-08-19T16:51:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"patterned sturdy dolphin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:50:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- patterned sturdy dolphin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
maximebiz/HORIANA_LoRa
|
maximebiz
| 2025-08-19T16:50:21Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-08-19T16:50:21Z |
---
license: creativeml-openrail-m
---
|
yasamanhaghbin/speechCura_medGemma_num_epoch_4_loraWeights
|
yasamanhaghbin
| 2025-08-19T16:47:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T16:35:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755620416
|
quantumxnode
| 2025-08-19T16:46:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:46:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
flockingalpha/task-13-Qwen-Qwen2.5-3B-Instruct
|
flockingalpha
| 2025-08-19T16:46:32Z | 59 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"region:us"
] | null | 2025-08-12T21:51:07Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
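Since this repository holds a PEFT (LoRA) adapter for Qwen/Qwen2.5-3B-Instruct, loading it might look like the following minimal sketch (untested; adapter and base ids are taken from this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-3B-Instruct"
adapter_id = "flockingalpha/task-13-Qwen-Qwen2.5-3B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
```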
|
OleksandrLitke/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-ferocious_agile_giraffe
|
OleksandrLitke
| 2025-08-19T16:46:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am ferocious_agile_giraffe",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T11:18:57Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am ferocious_agile_giraffe
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
18-Milica-y-Angel-David-debut-video-Clips/18.ver.video.milica.y.angel.david.debut.filtrado.clip.viral.completo
|
18-Milica-y-Angel-David-debut-video-Clips
| 2025-08-19T16:45:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-19T16:41:35Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?Milica-y-Angel)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Milica-y-Angel)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Milica-y-Angel)
|
Arpita1/sbs_convai2_dialogpt
|
Arpita1
| 2025-08-19T16:44:00Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"en",
"arxiv:2508.06886",
"base_model:microsoft/DialoGPT-small",
"base_model:finetune:microsoft/DialoGPT-small",
"license:cc-by-4.0",
"region:us"
] | null | 2025-08-19T16:41:35Z |
---
license: cc-by-4.0
language:
- en
base_model:
- microsoft/DialoGPT-small
---
# Model Card
### Description
DialoGPT-small finetuned on [ConvAI2](https://parl.ai/projects/convai2/) using the [SBS framework](https://arpita2512.github.io/score_before_you_speak/).
- **Repository:** [GitHub](https://github.com/arpita2512/score_before_you_speak)
- **Paper:** [https://arxiv.org/abs/2508.06886](https://arxiv.org/abs/2508.06886)
- **Funded by:** UKRI AI-Medical CDT (Grant Reference: EP/S024336/1)
- **Language(s) (NLP):** English
- **License:** CC-BY-4.0
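A minimal generation sketch with 🤗 Transformers, following the standard DialoGPT usage pattern (the response-quality scoring from the SBS framework is not shown here):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Arpita1/sbs_convai2_dialogpt"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a user turn, terminated by the EOS token as in DialoGPT.
ids = tokenizer.encode("Hi, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, ids.shape[-1]:][0], skip_special_tokens=True))
```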
## BibTeX
```
@inproceedings{saggar2025,
author = {Saggar, Arpita and Darling, Jonathan C. and Dimitrova, Vania and Sarikaya, Duygu and Hogg, David C.},
title = {Score Before You Speak: Improving Persona Consistency in Dialogue Generation using Response Quality Scores},
booktitle = {Proceedings of the 28th European Conference on Artificial Intelligence},
year = {2025},
}
```
|
AnonymousCS/xlmr_swedish_immigration2
|
AnonymousCS
| 2025-08-19T16:43:46Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T16:40:47Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_swedish_immigration2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_swedish_immigration2
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4718
- Accuracy: 0.8462
- 1-f1: 0.7917
- 1-recall: 0.8837
- 1-precision: 0.7170
- Balanced Acc: 0.8557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.368 | 1.0 | 5 | 0.3452 | 0.8615 | 0.7353 | 0.5814 | 1.0 | 0.7907 |
| 0.2416 | 2.0 | 10 | 0.3232 | 0.8538 | 0.7865 | 0.8140 | 0.7609 | 0.8438 |
| 0.3117 | 3.0 | 15 | 0.2919 | 0.8846 | 0.8148 | 0.7674 | 0.8684 | 0.8550 |
| 0.1611 | 4.0 | 20 | 0.3034 | 0.8923 | 0.8205 | 0.7442 | 0.9143 | 0.8549 |
| 0.2353 | 5.0 | 25 | 0.4718 | 0.8462 | 0.7917 | 0.8837 | 0.7170 | 0.8557 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
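For quick inference, a hedged sketch using the 🤗 `pipeline` API (the 0/1 label scheme matches the metrics reported above; the example sentence is illustrative):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="AnonymousCS/xlmr_swedish_immigration2")
# "Immigration has changed Swedish politics."
print(clf("Invandringen har förändrat svensk politik."))
```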
|
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755620068
|
helmutsukocok
| 2025-08-19T16:43:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"loud scavenging kangaroo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:43:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- loud scavenging kangaroo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mlx-community/dolphin3.0-llama3.2-1B-4Bit
|
mlx-community
| 2025-08-19T16:43:39Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"llama",
"en",
"dataset:OpenCoder-LLM/opc-sft-stage1",
"dataset:OpenCoder-LLM/opc-sft-stage2",
"dataset:microsoft/orca-agentinstruct-1M-v1",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:NousResearch/hermes-function-calling-v1",
"dataset:AI-MO/NuminaMath-CoT",
"dataset:AI-MO/NuminaMath-TIR",
"dataset:allenai/tulu-3-sft-mixture",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:HuggingFaceTB/smoltalk",
"dataset:cognitivecomputations/samantha-data",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:m-a-p/Code-Feedback",
"base_model:dphn/Dolphin3.0-Llama3.2-1B",
"base_model:quantized:dphn/Dolphin3.0-Llama3.2-1B",
"license:llama3.2",
"4-bit",
"region:us"
] | null | 2025-08-19T16:43:32Z |
---
license: llama3.2
datasets:
- OpenCoder-LLM/opc-sft-stage1
- OpenCoder-LLM/opc-sft-stage2
- microsoft/orca-agentinstruct-1M-v1
- microsoft/orca-math-word-problems-200k
- NousResearch/hermes-function-calling-v1
- AI-MO/NuminaMath-CoT
- AI-MO/NuminaMath-TIR
- allenai/tulu-3-sft-mixture
- cognitivecomputations/dolphin-coder
- HuggingFaceTB/smoltalk
- cognitivecomputations/samantha-data
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
language:
- en
base_model: dphn/Dolphin3.0-Llama3.2-1B
tags:
- mlx
---
# adrgrondin/Dolphin3.0-Llama3.2-1B-mlx-4Bit
The model [adrgrondin/Dolphin3.0-Llama3.2-1B-mlx-4Bit](https://huggingface.co/adrgrondin/Dolphin3.0-Llama3.2-1B-mlx-4Bit) was converted to MLX format from [dphn/Dolphin3.0-Llama3.2-1B](https://huggingface.co/dphn/Dolphin3.0-Llama3.2-1B) using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("adrgrondin/Dolphin3.0-Llama3.2-1B-mlx-4Bit")
prompt = "hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755620608
|
Sayemahsjn
| 2025-08-19T16:43:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:43:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fusi0n/llmon-tools-large-Q6_K-GGUF
|
fusi0n
| 2025-08-19T16:42:35Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"sft",
"trl",
"llama-cpp",
"gguf-my-repo",
"base_model:fusi0n/llmon-tools-large",
"base_model:quantized:fusi0n/llmon-tools-large",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T16:42:30Z |
---
base_model: fusi0n/llmon-tools-large
library_name: transformers
model_name: llmon-tools-large
tags:
- generated_from_trainer
- sft
- trl
- llama-cpp
- gguf-my-repo
licence: license
---
# fusi0n/llmon-tools-large-Q6_K-GGUF
This model was converted to GGUF format from [`fusi0n/llmon-tools-large`](https://huggingface.co/fusi0n/llmon-tools-large) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fusi0n/llmon-tools-large) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fusi0n/llmon-tools-large-Q6_K-GGUF --hf-file llmon-tools-large-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fusi0n/llmon-tools-large-Q6_K-GGUF --hf-file llmon-tools-large-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo fusi0n/llmon-tools-large-Q6_K-GGUF --hf-file llmon-tools-large-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo fusi0n/llmon-tools-large-Q6_K-GGUF --hf-file llmon-tools-large-q6_k.gguf -c 2048
```
|
grgazziz/model
|
grgazziz
| 2025-08-19T16:41:42Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-19T16:41:42Z |
---
license: other
license_name: other
license_link: LICENSE
---
|
Tavernari/git-commit-message-splitter-Qwen3-4B-Q4_K_M-GGUF
|
Tavernari
| 2025-08-19T16:41:23Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Tavernari/git-commit-message-splitter-Qwen3-4B",
"base_model:quantized:Tavernari/git-commit-message-splitter-Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T13:13:26Z |
---
base_model: Tavernari/git-commit-message-splitter-Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# Tavernari/git-commit-message-splitter-Qwen3-4B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Tavernari/git-commit-message-splitter-Qwen3-4B`](https://huggingface.co/Tavernari/git-commit-message-splitter-Qwen3-4B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Tavernari/git-commit-message-splitter-Qwen3-4B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Tavernari/git-commit-message-splitter-Qwen3-4B-Q4_K_M-GGUF --hf-file git-commit-message-splitter-qwen3-4b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Tavernari/git-commit-message-splitter-Qwen3-4B-Q4_K_M-GGUF --hf-file git-commit-message-splitter-qwen3-4b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Tavernari/git-commit-message-splitter-Qwen3-4B-Q4_K_M-GGUF --hf-file git-commit-message-splitter-qwen3-4b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Tavernari/git-commit-message-splitter-Qwen3-4B-Q4_K_M-GGUF --hf-file git-commit-message-splitter-qwen3-4b-q4_k_m.gguf -c 2048
```
|
phospho-app/Deimos252-ACT_BBOX-Light_dataset_deimos-6r50d
|
phospho-app
| 2025-08-19T16:40:13Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:phospho-app/Light_dataset_deimos_bboxes",
"region:us"
] |
robotics
| 2025-08-19T16:15:06Z |
---
datasets: phospho-app/Light_dataset_deimos_bboxes
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/Light_dataset_deimos_bboxes](https://huggingface.co/datasets/phospho-app/Light_dataset_deimos_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
joackimagno/MASID-v1
|
joackimagno
| 2025-08-19T16:39:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"base_model:joackimagno/Qwen-2.5-General-Recipe-Generation",
"base_model:finetune:joackimagno/Qwen-2.5-General-Recipe-Generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T16:27:29Z |
---
base_model: joackimagno/Qwen-2.5-General-Recipe-Generation
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** joackimagno
- **License:** apache-2.0
- **Finetuned from model:** joackimagno/Qwen-2.5-General-Recipe-Generation
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
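A minimal 🤗 Transformers sketch for trying the model (the model id comes from this card; the prompt and decoding settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "joackimagno/MASID-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give me a simple recipe for chicken adobo."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```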
|
Guilherme34/Maya-Q3_K_L-GGUF
|
Guilherme34
| 2025-08-19T16:39:38Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-to-speech",
"en",
"base_model:Guilherme34/Maya",
"base_model:quantized:Guilherme34/Maya",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-to-speech
| 2025-08-19T16:39:28Z |
---
library_name: transformers
language:
- en
pipeline_tag: text-to-speech
license: apache-2.0
base_model: Guilherme34/Maya
tags:
- llama-cpp
- gguf-my-repo
---
# Guilherme34/Maya-Q3_K_L-GGUF
This model was converted to GGUF format from [`Guilherme34/Maya`](https://huggingface.co/Guilherme34/Maya) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Guilherme34/Maya) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Guilherme34/Maya-Q3_K_L-GGUF --hf-file maya-q3_k_l.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Guilherme34/Maya-Q3_K_L-GGUF --hf-file maya-q3_k_l.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Guilherme34/Maya-Q3_K_L-GGUF --hf-file maya-q3_k_l.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Guilherme34/Maya-Q3_K_L-GGUF --hf-file maya-q3_k_l.gguf -c 2048
```
|
fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF
|
fengpeisheng1
| 2025-08-19T16:38:28Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:fengpeisheng1/mergekit-slerp-ariyvyf",
"base_model:quantized:fengpeisheng1/mergekit-slerp-ariyvyf",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-19T16:30:50Z |
---
base_model: fengpeisheng1/mergekit-slerp-ariyvyf
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF
This model was converted to GGUF format from [`fengpeisheng1/mergekit-slerp-ariyvyf`](https://huggingface.co/fengpeisheng1/mergekit-slerp-ariyvyf) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fengpeisheng1/mergekit-slerp-ariyvyf) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF --hf-file mergekit-slerp-ariyvyf-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF --hf-file mergekit-slerp-ariyvyf-iq4_nl-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF --hf-file mergekit-slerp-ariyvyf-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo fengpeisheng1/mergekit-slerp-ariyvyf-IQ4_NL-GGUF --hf-file mergekit-slerp-ariyvyf-iq4_nl-imat.gguf -c 2048
```
|
mohan1201/gemma-2b-code-explainer-v1
|
mohan1201
| 2025-08-19T16:38:08Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T15:04:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
exala/db_auto_6.1.2e
|
exala
| 2025-08-19T16:37:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T16:37:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
concept-unlearning/Meta-Llama-3-8B_ft_lora_all_novels_v4_ft_npo_gdr_lora_positive_dataset_v4
|
concept-unlearning
| 2025-08-19T16:37:02Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-01-08T12:21:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yookty/blockassist-bc-stinky_webbed_gecko_1755621407
|
yookty
| 2025-08-19T16:36:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinky webbed gecko",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:36:48Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinky webbed gecko
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chukypedro/RS1BF_hausa_female_18-29-V2
|
chukypedro
| 2025-08-19T16:36:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T16:17:53Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** chukypedro
- **License:** apache-2.0
- **Finetuned from model:** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/gemma3-4b-skin-cancer-classifier-GGUF
|
mradermacher
| 2025-08-19T16:33:58Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:doriankim/gemma3-4b-skin-cancer-classifier",
"base_model:quantized:doriankim/gemma3-4b-skin-cancer-classifier",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-19T16:17:31Z |
---
base_model: doriankim/gemma3-4b-skin-cancer-classifier
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/doriankim/gemma3-4b-skin-cancer-classifier
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#gemma3-4b-skin-cancer-classifier-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
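As a concrete starting point, one of the quants below can be run directly with a recent llama.cpp build (the file name is taken from the table below; the prompt is illustrative):
```bash
llama-cli --hf-repo mradermacher/gemma3-4b-skin-cancer-classifier-GGUF \
  --hf-file gemma3-4b-skin-cancer-classifier.Q4_K_M.gguf \
  -p "Describe the typical features of a benign nevus."
```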
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.mmproj-Q8_0.gguf) | mmproj-Q8_0 | 0.7 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.mmproj-f16.gguf) | mmproj-f16 | 1.0 | multi-modal supplement |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q6_K.gguf) | Q6_K | 3.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/gemma3-4b-skin-cancer-classifier-GGUF/resolve/main/gemma3-4b-skin-cancer-classifier.f16.gguf) | f16 | 7.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AnonymousCS/xlmr_norwegian_immigration2
|
AnonymousCS
| 2025-08-19T16:32:46Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T16:23:06Z |
---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: xlmr_norwegian_immigration2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr_norwegian_immigration2
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.9231
- 1-f1: 0.8810
- 1-recall: 0.8605
- 1-precision: 0.9024
- Balanced Acc: 0.9072
## Model description
More information needed
## Intended uses & limitations
More information needed
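Pending an official snippet, here is a minimal inference sketch; which class the positive label corresponds to is an assumption inferred from the 1-f1/1-recall metrics above, so verify against the checkpoint's `config.json`:
```python
from transformers import pipeline

# Minimal sketch: binary text classification with the fine-tuned checkpoint.
clf = pipeline("text-classification", model="AnonymousCS/xlmr_norwegian_immigration2")
# Norwegian example input; the model is multilingual (XLM-R backbone).
print(clf("Innvandring er et viktig tema i norsk politikk."))
```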
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | 1-f1 | 1-recall | 1-precision | Balanced Acc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------:|:-----------:|:------------:|
| 0.6746 | 1.0 | 5 | 0.6397 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.5485 | 2.0 | 10 | 0.6313 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.6165 | 3.0 | 15 | 0.6220 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.7306 | 4.0 | 20 | 0.6108 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.604 | 5.0 | 25 | 0.5968 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.5031 | 6.0 | 30 | 0.5714 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.5496 | 7.0 | 35 | 0.5302 | 0.6692 | 0.0 | 0.0 | 0.0 | 0.5 |
| 0.5351 | 8.0 | 40 | 0.4655 | 0.7769 | 0.4912 | 0.3256 | 1.0 | 0.6628 |
| 0.4308 | 9.0 | 45 | 0.3942 | 0.8538 | 0.7246 | 0.5814 | 0.9615 | 0.7850 |
| 0.3575 | 10.0 | 50 | 0.3077 | 0.9231 | 0.8780 | 0.8372 | 0.9231 | 0.9014 |
| 0.2808 | 11.0 | 55 | 0.2337 | 0.9308 | 0.8861 | 0.8140 | 0.9722 | 0.9012 |
| 0.2272 | 12.0 | 60 | 0.2053 | 0.9308 | 0.8889 | 0.8372 | 0.9474 | 0.9071 |
| 0.2462 | 13.0 | 65 | 0.2418 | 0.9000 | 0.8539 | 0.8837 | 0.8261 | 0.8959 |
| 0.1188 | 14.0 | 70 | 0.2207 | 0.9231 | 0.8810 | 0.8605 | 0.9024 | 0.9072 |
### Framework versions
- Transformers 4.56.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
Dejiat/blockassist-bc-savage_unseen_bobcat_1755621137
|
Dejiat
| 2025-08-19T16:32:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"savage unseen bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:32:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- savage unseen bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Abdu07/multitask-model
|
Abdu07
| 2025-08-19T16:30:41Z | 0 | 1 | null |
[
"image-classification",
"dataset:Hemg/AI-Generated-vs-Real-Images-Datasets",
"base_model:microsoft/resnet-50",
"base_model:finetune:microsoft/resnet-50",
"region:us"
] |
image-classification
| 2025-03-25T21:10:56Z |
---
datasets:
- Hemg/AI-Generated-vs-Real-Images-Datasets
metrics:
- accuracy
base_model:
- microsoft/resnet-50
pipeline_tag: image-classification
---
# DualSight: A Multi-Task Image Classifier for Object Recognition and Authenticity Verification
## Model Overview
This model is a **Multi-Task Image Classifier** that performs two tasks simultaneously:
1. **Object Recognition:** Identifies the primary objects in an image (e.g., "cat," "dog," "car") using pseudo-labels generated through a YOLO-based object detection approach.
2. **Authenticity Classification:** Determines whether the image is AI-generated or a real photograph.
The model uses a **ResNet-50** backbone with two heads: one for multi-class object recognition and another for binary classification (AI-generated vs. Real). It was trained on a subset of the [Hemg/AI-Generated-vs-Real-Images-Datasets](https://huggingface.co/datasets/Hemg/AI-Generated-vs-Real-Images-Datasets) and leverages YOLO for improved pseudo-labeling across the entire dataset.
## Model Details
- **Trained by:** [Abdellahi El Moustapha](https://abmstpha.github.io/)
- **Programming Language:** Python
- **Base Model:** ResNet-50
- **Datasets:** Hemg/AI-Generated-vs-Real-Images-Datasets
- **Library:** PyTorch
- **Pipeline Tag:** image-classification
- **Metrics:** Accuracy for both binary classification and multi-class object recognition
- **Version:** v1.0
## Intended Use
This model is designed for:
- **Digital Content Verification:** Detecting AI-generated images to help prevent misinformation.
- **Social Media Moderation:** Automatically flagging images that are likely AI-generated.
- **Content Analysis:** Assisting researchers in understanding the prevalence of AI art versus real images in digital media.
## How to Use
You can use this model locally or via the provided Hugging Face Space. For local usage, load the state dictionary into the model architecture using PyTorch. For example:
```python
import torch
from model import MultiTaskModel # Your model definition
# Instantiate your model architecture (must match training)
model = MultiTaskModel(...)
# Load the saved state dictionary (trained weights)
model.load_state_dict(torch.load("DualSight.pth", map_location="cpu"))
model.eval()
```
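Once the weights are loaded, inference might look like the following sketch. The ImageNet-style preprocessing and the two-head output layout (`obj_logits`, `auth_logits`) are assumptions that must match your `MultiTaskModel` definition:
```python
import torch
from PIL import Image
from torchvision import transforms

# Standard ImageNet preprocessing for a ResNet-50 backbone (an assumption;
# use whatever transforms were applied during training).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # shape: (1, 3, 224, 224)

with torch.no_grad():
    obj_logits, auth_logits = model(batch)  # two heads: objects, AI-vs-real

obj_idx = obj_logits.argmax(dim=1).item()      # index into your object-label list
is_ai = auth_logits.argmax(dim=1).item() == 1  # hypothetical: 1 = AI-generated
print(f"object class index: {obj_idx}, AI-generated: {is_ai}")
```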
Alternatively, you can test the model directly via our interactive demo:
[Test the model here](https://huggingface.co/spaces/Abdu07/DualSight-Demo)
## Training Data and Evaluation
- **Dataset:** The model was trained on a subset of the [Hemg/AI-Generated-vs-Real-Images-Datasets](https://huggingface.co/datasets/Hemg/AI-Generated-vs-Real-Images-Datasets) comprising approximately 152k images.
- **Metrics:**
- **Authenticity (AI vs. Real):** Validation accuracy reached around 85% within the first few epochs.
- **Object Recognition:** Pseudo-label accuracy started at around 38–40% and improved during training.
- **Evaluation:** Detailed evaluation metrics and loss curves are available in our training logs.
## Limitations and Ethical Considerations
- **Pseudo-Labeling:** The object recognition task uses pseudo-labels generated from a pretrained model, which may introduce noise or bias.
- **Authenticity Sensitivity:** The binary classifier may face challenges with highly realistic AI-generated images.
- **Usage:** This model is intended for research and prototyping purposes. Additional validation is recommended before deploying in high-stakes applications.
## How to Cite
If you use this model, please cite:
```bibtex
@misc{multitask_classifier,
title={Multi-Task Image Classifier},
author={Abdellahi El Moustapha},
year={2025},
howpublished={\url{https://huggingface.co/Abdu07/multitask-model}}
}
```
|
kyoukarawattsu/blockassist-bc-tenacious_arctic_manatee_1755620807
|
kyoukarawattsu
| 2025-08-19T16:28:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tenacious arctic manatee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:28:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tenacious arctic manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lguaman/MyManufacturingData
|
lguaman
| 2025-08-19T16:24:42Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T14:09:08Z |
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: MyManufacturingData
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for MyManufacturingData
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lguaman/MyManufacturingData", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755619041
|
hakimjustbao
| 2025-08-19T16:24:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:24:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755618877
|
katanyasekolah
| 2025-08-19T16:23:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:23:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Arpita1/sbs_personachat_dialogpt
|
Arpita1
| 2025-08-19T16:23:16Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"en",
"arxiv:2508.06886",
"base_model:microsoft/DialoGPT-small",
"base_model:finetune:microsoft/DialoGPT-small",
"license:cc-by-4.0",
"region:us"
] | null | 2025-08-19T16:09:43Z |
---
license: cc-by-4.0
language:
- en
base_model:
- microsoft/DialoGPT-small
---
# Model Card
### Description
DialoGPT-small finetuned on [PersonaChat](https://parl.ai/projects/personachat/) using the [SBS framework](https://arpita2512.github.io/score_before_you_speak/).
- **Repository:** [GitHub](https://github.com/arpita2512/score_before_you_speak)
- **Paper:** [https://arxiv.org/abs/2508.06886](https://arxiv.org/abs/2508.06886)
- **Funded by:** UKRI AI-Medical CDT (Grant Reference: EP/S024336/1)
- **Language(s) (NLP):** English
- **License:** CC-BY-4.0
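A minimal generation sketch using the standard DialoGPT turn format (decoding parameters are illustrative, not the settings from the paper; the SBS framework additionally scores candidate responses at inference time, which this sketch omits):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Arpita1/sbs_personachat_dialogpt")
model = AutoModelForCausalLM.from_pretrained("Arpita1/sbs_personachat_dialogpt")

# Standard DialoGPT single-turn pattern: user utterance + EOS, then generate.
prompt = "hi, what do you do for fun?" + tok.eos_token
input_ids = tok.encode(prompt, return_tensors="pt")
reply_ids = model.generate(
    input_ids,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tok.eos_token_id,
)
print(tok.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```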
## BibTeX
```bibtex
@inproceedings{saggar2025,
author = {Saggar, Arpita and Darling, Jonathan C. and Dimitrova, Vania and Sarikaya, Duygu and Hogg, David C.},
title = {Score Before You Speak: Improving Persona Consistency in Dialogue Generation using Response Quality Scores},
booktitle = {Proceedings of the 28th European Conference on Artificial Intelligence},
year = {2025},
}
```
|
grgazziz/mosquito
|
grgazziz
| 2025-08-19T16:22:41Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-08-19T16:21:02Z |
---
license: other
license_name: other
license_link: LICENSE
---
|
arshal13/echomimic-models
|
arshal13
| 2025-08-19T16:21:24Z | 0 | 0 | null |
[
"dataset:fka/awesome-chatgpt-prompts",
"base_model:openai/gpt-oss-120b",
"base_model:finetune:openai/gpt-oss-120b",
"license:apache-2.0",
"region:us"
] | null | 2025-08-19T16:15:45Z |
---
license: apache-2.0
datasets:
- fka/awesome-chatgpt-prompts
base_model:
- openai/gpt-oss-120b
---
|
ahmedheakl/iter0_mm_llamafactory_20250819_201744
|
ahmedheakl
| 2025-08-19T16:20:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-VL-3B-Instruct",
"region:us"
] | null | 2025-08-19T16:19:21Z |
---
library_name: peft
base_model: Qwen/Qwen2.5-VL-3B-Instruct
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: iter0_mm_llamafactory_20250819_201744
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iter0_mm_llamafactory_20250819_201744
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) on the infographics50 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
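Pending an official snippet, a minimal sketch for loading the LoRA adapter on top of the vision-language base model (the image path and question are placeholders, and the helper package `qwen-vl-utils` is assumed to be installed):
```python
from peft import PeftModel
from qwen_vl_utils import process_vision_info
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

# Load the base VLM, then attach the LoRA adapter from this repo.
base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-3B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "ahmedheakl/iter0_mm_llamafactory_20250819_201744")
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")

# "infographic.png" is a placeholder; the adapter was trained on infographics.
messages = [{"role": "user", "content": [
    {"type": "image", "image": "infographic.png"},
    {"type": "text", "text": "Summarize this infographic."},
]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
images, videos = process_vision_info(messages)
inputs = processor(text=[text], images=images, videos=videos, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0])
```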
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 5
- total_train_batch_size: 20
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
oceanfish/intent_classify_slot
|
oceanfish
| 2025-08-19T16:20:03Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | 2025-08-19T16:15:20Z |
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
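Until the authors add an official snippet, a minimal PEFT loading sketch (the prompt format expected by the adapter is undocumented; the example request below is hypothetical):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter weights from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "oceanfish/intent_classify_slot")
tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

# Hypothetical intent/slot-filling request; adjust to the trained prompt format.
messages = [{"role": "user", "content": "Book a flight to Paris tomorrow morning."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tok.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```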
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
exala/db_auto_6.1.1
|
exala
| 2025-08-19T16:19:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T15:36:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
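Until the authors fill this in, a minimal inference sketch (the label names and their meanings are undocumented; inspect the checkpoint's `config.json` before relying on them):
```python
from transformers import pipeline

# Minimal sketch: DistilBERT text classification with this checkpoint.
clf = pipeline("text-classification", model="exala/db_auto_6.1.1")
print(clf("Example input text to classify."))
```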
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Elizavr/blockassist-bc-reclusive_shaggy_bee_1755620165
|
Elizavr
| 2025-08-19T16:16:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"reclusive shaggy bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T16:16:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- reclusive shaggy bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|