| Column | Type | Range / values |
|:---|:---|:---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-14 00:42:58 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string (categorical) | 558 distinct values |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-14 00:36:41 |
| card | string | length 11 – 1.01M |
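Each record below follows this schema: one Hub model per row, with its identifier, author, timestamps, popularity counts, tags, pipeline tag, and the raw model-card text. As a rough illustration of how a listing with these columns could be queried, here is a minimal Python sketch using the `datasets` library; the dataset id `some-user/hub-model-cards` is a hypothetical placeholder, not the actual source of this dump.

```python
from datasets import load_dataset

# Hypothetical dataset id; substitute the repository that actually holds this dump.
ds = load_dataset("some-user/hub-model-cards", split="train")

# Keep only text-generation models and list the most downloaded ones.
text_gen = ds.filter(lambda row: row["pipeline_tag"] == "text-generation")
top = sorted(text_gen, key=lambda row: row["downloads"], reverse=True)[:5]

for row in top:
    print(row["modelId"], row["downloads"], row["likes"])
```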
IAAR-Shanghai/MARA_AGENTS
IAAR-Shanghai
2025-08-20T11:09:44Z
0
0
null
[ "text-generation", "en", "dataset:PKU-Alignment/PKU-SafeRLHF", "dataset:HuggingFaceH4/ultrafeedback_binarized", "dataset:Anthropic/hh-rlhf", "dataset:PKU-Alignment/BeaverTails-Evaluation", "dataset:declare-lab/HarmfulQA", "arxiv:2505.19743", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "license:apache-2.0", "region:us" ]
text-generation
2025-05-12T06:03:36Z
--- base_model: - meta-llama/Llama-3.1-8B-Instruct - meta-llama/Llama-3.2-3B-Instruct - meta-llama/Llama-3.2-1B-Instruct - meta-llama/Meta-Llama-3-8B-Instruct - mistralai/Mistral-7B-Instruct-v0.1 - mistralai/Mistral-7B-Instruct-v0.2 - mistralai/Mistral-7B-Instruct-v0.3 datasets: - PKU-Alignment/PKU-SafeRLHF - HuggingFaceH4/ultrafeedback_binarized - Anthropic/hh-rlhf - PKU-Alignment/BeaverTails-Evaluation - declare-lab/HarmfulQA language: - en license: apache-2.0 pipeline_tag: text-generation --- <p align="center"> <img src="icons.png" alt="MARA Icon" width="50" height="50"/> </p> <h1 align="center"> MARA AGENTS </h1> <div style="display: flex; justify-content: center; gap: 10px;"> <a href="https://github.com/IAAR-Shanghai/MARA"> <img src="https://img.shields.io/badge/GitHub-Repository-blue?logo=github" alt="GitHub"/> </a> <a href="https://huggingface.co/IAAR-Shanghai/MARA_AGENTS"> <img src="https://img.shields.io/badge/🤗%20Hugging%20Face-MARA_AGENTS-yellow" alt="Hugging Face"/> </a> <a href="https://arxiv.org/abs/2505.19743"> <img src="https://img.shields.io/badge/arXiv-Paper-8B0000?style=flat-square&logo=arxiv&logoColor=white"> </a> </div> **MARA** (Micro token-level Accept-Reject Alignment) simplifies the alignment process by breaking down sentence-level preference learning into fine-grained token-level binary classification. The MARA agent, a lightweight multi-layer perceptron (MLP), operates as an alignment model that evaluates and classifies each candidate token as either *Accepted* or *Rejected* during LLM text generation. <figure> <img src="mara_architecture.png" alt="mara_architecture" style="display: block; margin: 0 auto;" /> <figcaption style="text-align: center;">Architecture of MARA: The alignment model performs token selection through accept-reject decisions.</figcaption> </figure> ### 💫 Get MARA Agent Alignment Results ```python from mara_generator import MARAGenerator agent_path = "mistral_v3_2_1_actor.pth" base_model_path = "path2model/Mistral-7B-Instruct-v0.3" mara_generator = MARAGenerator(agent_path, base_model_path) instruction = "Please introduce yourself." raw_result = mara_generator.get_raw_output(instruction, do_sample=False) print("base model answer: ") print(raw_result["answer"]) align_result = mara_generator.get_proxy_output(instruction) print("mara agent align answer: ") print(align_result["answer"]) ``` ### 🔨 Train Your MARA Agent The source code and implementation details are open-sourced at [MARA](https://github.com/IAAR-Shanghai/MARA) – you can train your custom alignment model by following the provided instructions. ### 📊 Experiment Results <table class="center"> <tr> <td width=100% style="border: none"> <img src="table1.png" style="width:50%; max-width:100%;"> <div style="text-align: left; margin-top: 8px;">Performance improvements of MARA across the PKU-SafeRLHF, BeaverTails, and HarmfulQA datasets. Each entry shows the percentage improvement in preference rate achieved by applying MARA compared to using the original LLM alone.</div> </td> </tr> </table> <table class="center"> <tr> <td width=100% style="border: none"> <img src="table3.png" style="width:50%; max-width:100%;"> <div style="text-align: left; margin-top: 8px;">Compatibility analysis of MARA: an alignment model trained with one LLM is aggregated with a different inference LLM.
Each cell shows the percentage improvement in preference rate of our algorithm over the upstream (inference) model.</div> </td> </tr> </table> <table class="center"> <tr> <td width=100% style="border: none"> <img src="table2.png" style="width:100%"> <div style="text-align: left; margin-top: 8px;">Performance comparison of MARA against RLHF, DPO, and Aligner, measured by percentage improvements in preference rate.</div> </td> </tr> </table> More details and analyses of the experimental results can be found in our [paper](https://arxiv.org/abs/2505.19743). ### ✍️ Citation If the code or the paper has been useful in your research, please cite our work: ``` @article{zhang2025tokenlevelacceptrejectmicro, title={Token-level Accept or Reject: A Micro Alignment Approach for Large Language Models}, author={Yang Zhang and Yu Yu and Bo Tang and Yu Zhu and Chuxiong Sun and Wenqiang Wei and Jie Hu and Zipeng Xie and Zhiyu Li and Feiyu Xiong and Edward Chung}, journal={arXiv preprint arXiv:2505.19743}, year={2025} } ```
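As an aside, the token-level accept-reject mechanism described in the card above can be sketched in a few lines. The following is an illustrative toy example, not the actual `mara_generator` implementation: at each decoding step the base LLM proposes top-k candidate tokens, a small MLP agent labels each candidate as accept or reject from simple per-candidate features, and sampling proceeds over the accepted set (falling back to the best candidate if everything is rejected). The feature construction and threshold here are assumptions made purely for illustration.

```python
import torch

def mara_decode_step(logits, hidden_state, alignment_agent, top_k=8):
    """Illustrative accept-reject filtering for one decoding step.

    logits: (vocab_size,) next-token logits from the base LLM
    hidden_state: (hidden_dim,) last-layer hidden state of the current prefix
    alignment_agent: MLP mapping per-candidate features -> accept logit
    """
    top_logits, top_ids = torch.topk(logits, k=top_k)           # candidate tokens
    feats = torch.cat(
        [hidden_state.expand(top_k, -1), top_logits.unsqueeze(-1)], dim=-1
    )                                                            # assumed per-candidate features
    accept_prob = torch.sigmoid(alignment_agent(feats)).squeeze(-1)
    accepted = accept_prob > 0.5                                 # binary accept / reject decision
    if not accepted.any():                                       # fall back to the best candidate
        return top_ids[0]
    masked = top_logits.masked_fill(~accepted, float("-inf"))
    probs = torch.softmax(masked, dim=-1)
    return top_ids[torch.multinomial(probs, 1)]                  # sample among accepted tokens

# Toy usage with random tensors standing in for a real LLM and a trained agent.
vocab, hidden = 32000, 64
agent = torch.nn.Sequential(torch.nn.Linear(hidden + 1, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
next_token = mara_decode_step(torch.randn(vocab), torch.randn(hidden), agent)
print(int(next_token))
```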
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755688046
Ferdi3425
2025-08-20T11:08:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T11:08:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aralper18/blockassist-bc-gilded_tangled_albatross_1755688032
aralper18
2025-08-20T11:07:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gilded tangled albatross", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T11:07:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gilded tangled albatross --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755686422
quantumxnode
2025-08-20T11:06:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T11:06:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ver-video-intimo-de-abigail-lalama-clip/ver.filtrado.Video.de.Abigail.Lalama.y.Snayder
ver-video-intimo-de-abigail-lalama-clip
2025-08-20T11:03:33Z
0
0
null
[ "region:us" ]
null
2025-08-20T11:01:19Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5xr5mb3e?leaked-videos/" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> Abigail Lalama viral: influencer confirms video with Snayder on Telegram. The name Abigail Lalama is going viral and spreading on Telegram and X after the leak of her video with Snayder. We explain who they are and what happened. Image: Abigail Lalama viral: influencer confirms video with Snayder on Telegram. Abigail confirmed the leak of the content. - Photo: Instagram abigail_lalama. The name Abigail Lalama went viral on social networks such as Telegram and Twitter (now X) after the leak of an intimate video with Snayder was confirmed. This triggered a spike in searches such as "Abigail Lalama leaked video", "Leaked video of Abigail and Snayder", and "Abigail Lalama Telegram", among others. The confirmation came directly from the influencer, which set off a wave of comments, reactions, and solidarity online. Below, we explain who Abigail Lalama is, who Snayder is, what she said, and why this case went viral so quickly. READ ALSO: What did Mariana Botas say about Drake Bell, and what did the singer reply? View post on TikTok. Who is Abigail Lalama and why is she known? Abigail Lalama is a 22-year-old Ecuadorian content creator from Guayaquil who has gained popularity mainly on TikTok, where she shares livestreams, challenges, family moments, and everyday content together with her twin sister, Génesis. Her community has grown thanks to her closeness, charisma, and family-oriented style. On TikTok, her account @laoficialabigail has more than 400,000 followers, and on Instagram she exceeds 173,000. With her sister, she forms 'Team Lalama', posting content centered on daily life, motherhood, and family ties. Who is Snayder and what video was leaked? Snayder's precise identity has not been revealed by the main outlets consulted so far, but he is known to be someone in Abigail's circle, reportedly her ex-partner. The leaked video, described as intimate, was uploaded to platforms such as Telegram and TikTok. Users reported that the young woman appearing in the viral recording shared tattoos and features with Abigail Lalama. According to Abigail, the video circulated without her consent, and she attributes the leak to that ex. View post on TikTok. What did Abigail Lalama say about the leak? In a live video, visibly affected and in tears, Abigail Lalama confirmed the viral leak of the content. She directly accused her ex-partner of having spread that intimate material with the intention of upsetting her new relationship.
milliarderdol/blockassist-bc-roaring_rough_scorpion_1755685708
milliarderdol
2025-08-20T11:01:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring rough scorpion", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T11:01:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - roaring rough scorpion --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755687569
Ferdi3425
2025-08-20T11:00:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T11:00:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
VIDEOS-18-vietnamese-viral-video-Clip/New.full.videos.vietnamese.Viral.Video.Official.Tutorial
VIDEOS-18-vietnamese-viral-video-Clip
2025-08-20T11:00:08Z
0
0
null
[ "region:us" ]
null
2025-08-20T11:00:00Z
<a href="https://sdu.sk/AyL"><img src="https://files.qatarliving.com/event/2025/06/20/Jawan69_0-1749987397680.gif" alt="fsd" /></a> <a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝙨𝙞𝙜𝙣 𝙪𝙥 𝙖𝙣𝙙 𝙬𝙖𝙩𝙘𝙝 𝙛𝙪𝙡𝙡 𝙫𝙞𝙙𝙚𝙤 𝙃𝘿)</a> <a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
kapalbalap/blockassist-bc-peaceful_wary_owl_1755687474
kapalbalap
2025-08-20T10:58:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:58:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aralper18/blockassist-bc-gilded_tangled_albatross_1755687456
aralper18
2025-08-20T10:58:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gilded tangled albatross", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:58:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gilded tangled albatross --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lc700x/dpt-dinov2-base-kitti
lc700x
2025-08-20T10:58:13Z
0
0
null
[ "safetensors", "dpt", "vision", "depth-estimation", "dinov2", "arxiv:2304.07193", "arxiv:2103.13413", "license:apache-2.0", "region:us" ]
depth-estimation
2025-08-20T03:01:42Z
--- license: apache-2.0 tags: - vision - depth-estimation - dinov2 inference: false --- # Model Card: DPT model with DINOv2 backbone ## Model Details DPT (Dense Prediction Transformer) model with DINOv2 backbone as proposed in [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/dpt_architecture.jpg" alt="drawing" width="600"/> <small> DPT architecture. Taken from the <a href="https://arxiv.org/abs/2103.13413" target="_blank">original paper</a>. </small> ### Resources - [DINOv2 Paper](https://arxiv.org/abs/2304.07193) - [DPT Paper](https://arxiv.org/abs/2103.13413) ### Use with Transformers ```python from transformers import AutoImageProcessor, DPTForDepthEstimation import torch import numpy as np from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image_processor = AutoImageProcessor.from_pretrained("facebook/dpt-dinov2-base-kitti") model = DPTForDepthEstimation.from_pretrained("facebook/dpt-dinov2-base-kitti") # prepare image for the model inputs = image_processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) predicted_depth = outputs.predicted_depth # interpolate to original size prediction = torch.nn.functional.interpolate( predicted_depth.unsqueeze(1), size=image.size[::-1], mode="bicubic", align_corners=False, ) # visualize the prediction output = prediction.squeeze().cpu().numpy() formatted = (output * 255 / np.max(output)).astype("uint8") depth = Image.fromarray(formatted) ``` ## Model Use ### Intended Use The model is intended to showcase that using the DPT framework with DINOv2 as backbone yields a powerful depth estimator. ### BibTeX entry and citation info ```bibtex @misc{oquab2023dinov2, title={DINOv2: Learning Robust Visual Features without Supervision}, author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski}, year={2023}, eprint={2304.07193}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
kapalbalap/blockassist-bc-peaceful_wary_owl_1755687296
kapalbalap
2025-08-20T10:55:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:55:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ver-video-intimo-de-abigail-lalama-clip/VER.intimo.video.de.abigail.lalama.y.snayder.filtrado.en.telegram.se.vuelve.viral
ver-video-intimo-de-abigail-lalama-clip
2025-08-20T10:54:31Z
0
0
null
[ "region:us" ]
null
2025-08-20T10:51:35Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5xr5mb3e?leaked-videos/" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> Abigail Lalama viral: influencer confirms video with Snayder on Telegram. The name Abigail Lalama is going viral and spreading on Telegram and X after the leak of her video with Snayder. We explain who they are and what happened. Image: Abigail Lalama viral: influencer confirms video with Snayder on Telegram. Abigail confirmed the leak of the content. - Photo: Instagram abigail_lalama. The name Abigail Lalama went viral on social networks such as Telegram and Twitter (now X) after the leak of an intimate video with Snayder was confirmed. This triggered a spike in searches such as "Abigail Lalama leaked video", "Leaked video of Abigail and Snayder", and "Abigail Lalama Telegram", among others. The confirmation came directly from the influencer, which set off a wave of comments, reactions, and solidarity online. Below, we explain who Abigail Lalama is, who Snayder is, what she said, and why this case went viral so quickly. READ ALSO: What did Mariana Botas say about Drake Bell, and what did the singer reply? View post on TikTok. Who is Abigail Lalama and why is she known? Abigail Lalama is a 22-year-old Ecuadorian content creator from Guayaquil who has gained popularity mainly on TikTok, where she shares livestreams, challenges, family moments, and everyday content together with her twin sister, Génesis. Her community has grown thanks to her closeness, charisma, and family-oriented style. On TikTok, her account @laoficialabigail has more than 400,000 followers, and on Instagram she exceeds 173,000. With her sister, she forms 'Team Lalama', posting content centered on daily life, motherhood, and family ties. Who is Snayder and what video was leaked? Snayder's precise identity has not been revealed by the main outlets consulted so far, but he is known to be someone in Abigail's circle, reportedly her ex-partner. The leaked video, described as intimate, was uploaded to platforms such as Telegram and TikTok. Users reported that the young woman appearing in the viral recording shared tattoos and features with Abigail Lalama. According to Abigail, the video circulated without her consent, and she attributes the leak to that ex. View post on TikTok. What did Abigail Lalama say about the leak? In a live video, visibly affected and in tears, Abigail Lalama confirmed the viral leak of the content. She directly accused her ex-partner of having spread that intimate material with the intention of upsetting her new relationship. Her words were harsh and reflected disappointment and pain: "That is the worst thing a man can do to you... Yes, I was stupid to let myself be recorded with him... And being my partner, he goes and uploads it... the shameless man. He has no shame." She also added a reflection on trust and the worst kind of betrayal: "That's when you realize who you are sleeping next to, that's when they take off the mask." Later, she shared in her stories videos of fans supporting her, along with clips showing a calmer attitude, demonstrating resilience in the face of the situation.
What does the law say about content leaked without consent? In Ecuador, sharing intimate content without consent is punishable under the Organic Comprehensive Criminal Code (COIP). Article 178 punishes the dissemination of intimate images or videos without authorization with sentences of one to three years. In addition, Article 230, on digital violence, can raise the penalty to up to four years if there is intent to harm or humiliate. Abigail has the option of filing a complaint with the Attorney General's Office (Fiscalía General del Estado), the Community Police Unit (UPC), or the Judiciary Council's portal, or of turning to the Ombudsman's Office (Defensoría del Pueblo) and organizations that provide psychological support and legal advice. Measures such as removal of the content and full reparation can also be requested.
Mostefa-Terbeche/diabetic-retinopathy-aptos-vit_b_16-original-20250707-233531
Mostefa-Terbeche
2025-08-20T10:54:10Z
0
0
null
[ "diabetic-retinopathy", "medical-imaging", "pytorch", "computer-vision", "retinal-imaging", "dataset:aptos", "license:apache-2.0", "model-index", "region:us" ]
null
2025-08-20T08:02:11Z
--- license: apache-2.0 tags: - diabetic-retinopathy - medical-imaging - pytorch - computer-vision - retinal-imaging datasets: - aptos metrics: - accuracy - quadratic-kappa - auc model-index: - name: aptos_vit_b_16_original results: - task: type: image-classification name: Diabetic Retinopathy Classification dataset: type: aptos name: APTOS metrics: - type: accuracy value: 0.7103825136612022 - type: quadratic-kappa value: 0.8582862102858018 --- # Diabetic Retinopathy Classification Model ## Model Description This model is trained for diabetic retinopathy classification using the vit_b_16 architecture on the aptos dataset with original preprocessing. ## Model Details - **Architecture**: vit_b_16 - **Dataset**: aptos - **Preprocessing**: original - **Training Date**: 20250707-233531 - **Task**: 5-class diabetic retinopathy grading (0-4) - **Directory**: aptos_vit_b_16_20250707-233531_new ## Performance - **Test Accuracy**: 0.7103825136612022 - **Test Quadratic Kappa**: 0.8582862102858018 - **Validation Kappa**: 0.8582862102858018 ## Usage ```python import torch from huggingface_hub import hf_hub_download # Download model model_path = hf_hub_download( repo_id="your-username/diabetic-retinopathy-aptos-vit_b_16-original", filename="model_best.pt" ) # Load model model = torch.load(model_path, map_location='cpu') ``` ## Classes - 0: No DR (No diabetic retinopathy) - 1: Mild DR (Mild non-proliferative diabetic retinopathy) - 2: Moderate DR (Moderate non-proliferative diabetic retinopathy) - 3: Severe DR (Severe non-proliferative diabetic retinopathy) - 4: Proliferative DR (Proliferative diabetic retinopathy) ## Citation If you use this model, please cite your research paper/thesis.
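The usage snippet in the card above stops after loading the checkpoint. Purely as a sketch of how a single prediction over the five DR grades might look, assuming the checkpoint deserializes to a complete PyTorch classifier that expects 224x224 ImageNet-normalized RGB input (the card does not confirm any of this), one could write:

```python
import torch
from PIL import Image
from torchvision import transforms

labels = ["No DR", "Mild DR", "Moderate DR", "Severe DR", "Proliferative DR"]

# Assumed preprocessing: 224x224 RGB with ImageNet normalization (not specified in the card).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# As in the card's snippet; weights_only=False because the file is assumed to hold a pickled nn.Module.
model = torch.load("model_best.pt", map_location="cpu", weights_only=False)
model.eval()

image = Image.open("retina_example.jpg").convert("RGB")  # hypothetical fundus photograph
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
print(labels[int(logits.argmax(dim=-1))])
```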
starf5/so101PickPinkChoco_policy
starf5
2025-08-20T10:54:09Z
10
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:starf5/so101PickPinkChoco", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-07-31T01:43:35Z
--- base_model: lerobot/smolvla_base datasets: starf5/so101PickPinkChoco library_name: lerobot license: apache-2.0 model_name: smolvla pipeline_tag: robotics tags: - lerobot - robotics - smolvla --- # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash lerobot-train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash lerobot-record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
kapalbalap/blockassist-bc-peaceful_wary_owl_1755687134
kapalbalap
2025-08-20T10:53:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:52:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755685461
ihsanridzi
2025-08-20T10:51:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wiry flexible owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:51:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wiry flexible owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chainway9/blockassist-bc-untamed_quick_eel_1755685251
chainway9
2025-08-20T10:49:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:49:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aralper18/blockassist-bc-gilded_tangled_albatross_1755686891
aralper18
2025-08-20T10:48:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gilded tangled albatross", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:48:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gilded tangled albatross --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chooseL1fe/blockassist-bc-thorny_flightless_albatross_1755686419
chooseL1fe
2025-08-20T10:47:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny flightless albatross", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:47:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny flightless albatross --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kapalbalap/blockassist-bc-peaceful_wary_owl_1755686814
kapalbalap
2025-08-20T10:47:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:47:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ilkhom199/8790b2d7-a501-485e-a562-a07401e8f05a
ilkhom199
2025-08-20T10:47:31Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-20T10:46:36Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hdong0/deepseek-Llama-8B-Open-R1-GRPO_deepscaler_acc_mu_8_constant_lr_no_kl
hdong0
2025-08-20T10:47:08Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:agentica-org/DeepScaleR-Preview-Dataset", "arxiv:2402.03300", "base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-8B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T20:55:11Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B datasets: agentica-org/DeepScaleR-Preview-Dataset library_name: transformers model_name: deepseek-Llama-8B-Open-R1-GRPO_deepscaler_acc_mu_8_constant_lr_no_kl tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for deepseek-Llama-8B-Open-R1-GRPO_deepscaler_acc_mu_8_constant_lr_no_kl This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) on the [agentica-org/DeepScaleR-Preview-Dataset](https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hdong0/deepseek-Llama-8B-Open-R1-GRPO_deepscaler_acc_mu_8_constant_lr_no_kl", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.0.dev0 - Transformers: 4.52.0.dev0 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755686740
Ferdi3425
2025-08-20T10:46:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:46:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
original-Isha-Malviya-Viral-video-Clip/New.full.videos.isha.malviya.Viral.Video.Official.Tutorial
original-Isha-Malviya-Viral-video-Clip
2025-08-20T10:46:14Z
0
0
null
[ "region:us" ]
null
2025-08-20T10:46:00Z
<a href="https://sdu.sk/AyL"><img src="https://files.qatarliving.com/event/2025/06/20/Jawan69_0-1749987397680.gif" alt="fsd" /></a> <a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝙨𝙞𝙜𝙣 𝙪𝙥 𝙖𝙣𝙙 𝙬𝙖𝙩𝙘𝙝 𝙛𝙪𝙡𝙡 𝙫𝙞𝙙𝙚𝙤 𝙃𝘿)</a> <a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755685146
sampingkaca72
2025-08-20T10:45:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:45:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored stealthy elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1755684954
kojeklollipop
2025-08-20T10:43:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:43:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
gauravvivek8/llama2finetune
gauravvivek8
2025-08-20T10:42:09Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-20T10:38:26Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Xyeru/qwen-0.6b-touch-rugby
Xyeru
2025-08-20T10:40:59Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-0.6B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Qwen3-0.6B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-19T18:51:29Z
--- base_model: unsloth/Qwen3-0.6B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Xyeru - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-0.6B-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
unitova/blockassist-bc-zealous_sneaky_raven_1755684772
unitova
2025-08-20T10:39:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "zealous sneaky raven", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:39:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - zealous sneaky raven --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755685231
Sayemahsjn
2025-08-20T10:39:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:39:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
trunghieuma22/finetuned_model
trunghieuma22
2025-08-20T10:39:13Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gpt_oss", "trl", "en", "base_model:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "base_model:finetune:unsloth/gpt-oss-20b-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-20T10:39:05Z
--- base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gpt_oss - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** trunghieuma22 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gpt-oss-20b-unsloth-bnb-4bit This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755684697
manusiaperahu2012
2025-08-20T10:38:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring long tuna", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:38:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - roaring long tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755684588
katanyasekolah
2025-08-20T10:38:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky sprightly cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:37:59Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - silky sprightly cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
VIDEOS-18-brown-girl-viral-video-Clip/New.full.videos.brown.girl.Viral.Video.Official.Tutorial
VIDEOS-18-brown-girl-viral-video-Clip
2025-08-20T10:37:48Z
0
0
null
[ "region:us" ]
null
2025-08-20T10:37:36Z
<a href="https://sdu.sk/AyL"><img src="https://files.qatarliving.com/event/2025/06/20/Jawan69_0-1749987397680.gif" alt="fsd" /></a> <a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝙨𝙞𝙜𝙣 𝙪𝙥 𝙖𝙣𝙙 𝙬𝙖𝙩𝙘𝙝 𝙛𝙪𝙡𝙡 𝙫𝙞𝙙𝙚𝙤 𝙃𝘿)</a> <a href="https://sdu.sk/AyL" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
chowshib/blockassist-bc-majestic_melodic_beaver_1755686083
chowshib
2025-08-20T10:36:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "majestic melodic beaver", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:36:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - majestic melodic beaver --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755684494
quantumxnode
2025-08-20T10:34:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant peckish seahorse", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:34:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant peckish seahorse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
amaye15/autoencoder-robust-demo
amaye15
2025-08-20T10:33:25Z
0
0
transformers
[ "transformers", "safetensors", "autoencoder", "feature-extraction", "generated_from_trainer", "custom_code", "region:us" ]
feature-extraction
2025-08-20T10:17:58Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: autoencoder-robust-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # autoencoder-robust-demo This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 3 | 1.5616 | ### Framework versions - Transformers 4.55.2 - Pytorch 2.8.0 - Datasets 4.0.0 - Tokenizers 0.21.4
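The card above does not show how to load the model. Since the repository is tagged `custom_code` and `feature-extraction`, one plausible route is `AutoModel` with remote code enabled; this is only a guess that the repo registers its autoencoder class for auto loading, and the expected input format is not documented, so the sketch stops at inspecting the loaded model.

```python
from transformers import AutoModel

# Assumption: the repo's config registers the custom autoencoder class via auto_map,
# so loading requires trust_remote_code=True (the repo is tagged custom_code).
model = AutoModel.from_pretrained("amaye15/autoencoder-robust-demo", trust_remote_code=True)

print(model.config)  # inspect the autoencoder's hyperparameters
print(sum(p.numel() for p in model.parameters()), "parameters")
```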
kapalbalap/blockassist-bc-peaceful_wary_owl_1755685914
kapalbalap
2025-08-20T10:32:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:32:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
loyal-misc/portalchambers
loyal-misc
2025-08-20T10:31:19Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:LyliaEngine/Pony_Diffusion_V6_XL", "base_model:adapter:LyliaEngine/Pony_Diffusion_V6_XL", "license:unlicense", "region:us" ]
text-to-image
2025-08-20T10:17:13Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/portalchambers39.png text: >- Portal2, testchamber, box, light, white background, empty, complex, futuristic, sci-fi, science fiction, game, video game, blue red green yellow dark black, intricate lines, hard edges, perfect, masterpiece, microscopic detail, miniature, figurine style, circle, oval, dodecahedron, brick, concrete background, serene, important, impressive, image, jpeg, 16:9,9:16,1:1,? base_model: LyliaEngine/Pony_Diffusion_V6_XL instance_prompt: portal, testchamber license: unlicense --- # portalchambers <Gallery /> ## Trigger words You should use `portal` to trigger the image generation. You should use `testchamber` to trigger the image generation. ## Download model [Download](/loyal-misc/portalchambers/tree/main) them in the Files & versions tab.
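The card above gives trigger words and a download link but no loading code. A minimal sketch of applying this LoRA on top of the stated base model with diffusers might look like the following; it assumes the base repository is available in diffusers format, that an SDXL pipeline is appropriate for Pony Diffusion V6 XL, and that the LoRA weight filename is `portalchambers.safetensors` (check the Files & versions tab for the real name).

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Assumption: the base model is distributed in diffusers format and is SDXL-based.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "LyliaEngine/Pony_Diffusion_V6_XL", torch_dtype=torch.float16
).to("cuda")

# Assumption: the LoRA file name; check the repo's Files & versions tab for the actual name.
pipe.load_lora_weights("loyal-misc/portalchambers", weight_name="portalchambers.safetensors")

# Both trigger words from the card are included in the prompt.
image = pipe(
    "portal, testchamber, white background, futuristic, sci-fi, masterpiece",
    num_inference_steps=30,
).images[0]
image.save("portalchamber.png")
```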
meetween/Llama-speechlmm-1.0-s
meetween
2025-08-20T10:30:25Z
4
0
transformers
[ "transformers", "safetensors", "llava", "feature-extraction", "generated_from_trainer", "endpoints_compatible", "region:us" ]
feature-extraction
2025-03-01T11:09:40Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: Llama-speechlmm-1.0-s results: [] --- ## Model information The SpeechLMM 1.0 collection of multimodal and multilingual large language models is a collection of instruction-tuned generative models in 4 different sizes: S (2B), M (4B), L (9B) and XL (71B), supporting text, audio and video as input and only text as output. The SpeechLMM 1.0 models are optimized for various X-to-text generation tasks, namely: - Machine Translation - Automatic Speech Recognition - Speech Translation - Speech Summarization - Spoken Question Answering - Spoken Language Understanding (beta) - Visual Speech Recognition (beta) **Model Developer:** Meetween consortium **Supported Languages:** English, French, Italian, German, and Spanish are officially supported (for a subset of the supported tasks). The Llama 3.X backbone and the SeamlessM4T v2 audio encoder have been trained on a broader collection of languages than these 5 supported languages, so the model might exhibit good performance on other languages too. **Model Release Date:** Feb 28, 2025 **License:** see [LICENSE](LICENSE) ### Model Architecture SpeechLMM 1.0 is an auto-regressive multimodal language model based on a Llama 3.X backbone (X varies with the model size), a speech-specific stack consisting of a pre-trained audio encoder ([SeamlessM4T v2](https://ai.meta.com/research/publications/seamless-multilingual-expressive-and-streaming-speech-translation/)) and an audio adapter, and a video-specific stack consisting of a pre-trained video encoder ([Auto-AVSR](https://ieeexplore.ieee.org/document/10096889)) and a video adapter. <!-- TODO: add the image of the model architecture here --> | Model | Params | Input modalities | Output modalities | Context Length | |:---------------- |:----------- |:------------------------------------------ |:----------------- |:-------------- | | SpeechLMM 1.0 S | 2B (2.17B) | Multilingual text and audio, English video | Multilingual Text | 128k | | SpeechLMM 1.0 M | 4B (4.15B) | Multilingual text and audio, English video | Multilingual Text | 128k | | SpeechLMM 1.0 L | 9B (8.98B) | Multilingual text and audio, English video | Multilingual Text | 128k | | SpeechLMM 1.0 XL (beta) | 71B (71.5B) | Multilingual text and audio, English video | Multilingual Text | 128k | #### Audio and video encoders For all the 4 sizes of SpeechLMM 1.0, the audio encoder is **SeamlessM4T v2 Large** (`facebook/seamless-m4t-v2-large`) and the video encoder is **Auto-AVSR** (`vsr_trlrs3vox2_base`). #### Audio and video adapters For all the 4 sizes of SpeechLMM 1.0, the audio and video adapters are: | Modality | Architecture | Number of layers | Compression factor | | :------- | :----------- | :--------------- | :----------------- | | Audio | MLP | 4 | 1 | | Video | Window-level Q-former <br> (4 queries) | 4 | 4 | #### LLM backbone | Model | Backbone | |:---------------- |:---------------------- | | SpeechLMM 1.0 S | Llama 3.2 1B Instruct | | SpeechLMM 1.0 M | Llama 3.2 3B Instruct | | SpeechLMM 1.0 L | Llama 3.1 8B Instruct | | SpeechLMM 1.0 XL (beta) | Llama 3.3 70B Instruct | ## How to use Currently, this model can only be used via our [`speechlmm`](https://github.com/meetween/speechlmm) codebase. Refer to the instructions there for more details. Important: before you can use this model, you must download the SeamlessM4T v2 speech encoder and the Auto-AVSR video encoder by following the instructions provided in the README of the above repo.
Please note that by doing so, you agree with their respective license terms. ## Training Data ### Monolingual | TASK | Task name | Dataset | Language | License | | -------- | ---------------------------- | ------------------ | -------- | ------------------------------------------ | | **ASR** | Automatic Speech Recognition | **LibriHeavy** | en | CC-BY-4.0 | | | | **LibriTTS** | en | CC BY 4.0 | | | | **AMI** | en | CC-BY-4.0 | | | | **ICSI** | en | CC-BY-4.0 | | **LIPREAD** | Visual Speech Recognition | **LRS2-BBC** | en | Custom | | **SSUM** | Speech Summarization | **AMI** | en | CC-BY-4.0 | | | | **ICSI** | en | CC-BY-4.0 | | **SQA** | Spoken Question Answering | **Spoken SQUAD** | en | CC-BY-SA-4.0 | | **SLU** | Spoken Language Understanding| **SLURP** | en | CC BY 4.0 (text) <br> CC BY-NC 4.0 (audio) | ### Multilingual | TASK | Task name | Dataset | Language | License | | ---------------- | ----------------------------- | ------------------------------------ | ------------------------------------------- | ------------------------------------------ | | **ASR** | Automatic Speech Recognition | **CoVoST2** | en, fr, it, de, es | CC0 | | | | **CommonVoice** | en, fr, it, de, es | Apache-2.0 | | **ST** | Speech-to-text Translation | **CoVoST2** | en → de, {fr, it, de, es} → en | CC0 | | | | **EuroParl-ST** | {en, fr, it, de, es} → {en, fr, it, de, es} | CC-BY-NC-4.0 | | **MT** | Machine Translation | **EuroParl-ST** | {en, fr, it, de, es} → {en, fr, it, de, es} | CC-BY-NC-4.0 | | **TextInstruct** | Text Instruction Following | **Everything_Instruct_Multilingual** | en, fr, it, de, es, ru, zh, ko, ur, la, ar,<br>hi, ja, nl, pt | Apache-2.0 | | **SLU** | Spoken Language Understanding | **Speech-Massive** | fr, de | CC-BY-NC-SA-4.0 | ## Evaluation Results The following results specifically refer to the S model. ### ASR Metrics | Dataset | Language | WER ⬇ | |:----------|:-----------|------:| | **MUSTC** | en | 19.2 | | **MTEDX** | it | 29.43 | | **MTEDX** | fr | 28.97 | | **ACL6060** | en | 19.4 | | **MTEDX** | es | 29.71 | ### SQA Metrics | Dataset | Language | Accuracy ⬆ | |:--------------|:-----------|-----------:| | **Spoken SQuAD** | en | 65.93 | **NOTE**: Accuracy is measured with an LLM as a judge (**Llama3-70b-8192**, via the Groq API) using the following prompts: - **System prompt** You are a helpful assistant that evaluates answers to questions given a certain context. You will be given inputs of the form:<br> Context: \<CONTEXT\><br> Question: \<QUESTION\><br> Answer: \<ANSWER\><br> Your task is to determine if the given answer is correct or not, assuming the correct answer is contained in the context. Your response should be formatted as a JSON string having the following structure: {"correct_answer": \<true/false\>, "rationale": \<RATIONALE\>} where 'rationale' must be a string explaining why the answer is correct or incorrect. If you need to include double quote characters (") in the 'rationale' string, you must escape them with a backslash (\\). For example, if you want to include the string "Hello, World!", you should write it as \\"Hello, World!\\". 
- **User prompt** Context: \<CONTEXT\><br> Question: \<QUESTION\><br> Answer: \<ANSWER\> ### MT Metrics | Dataset | Source Language | Target Language | Bleu ⬆ | CHRF ⬆ | |:----------|:------------------|:------------------|-------:|-------:| | **FLORES** | en | de | 21.11 | 51.77 | | **FLORES** | en | es | 18.61 | 48.02 | | **FLORES** | en | it | 16.63 | 47.24 | | **ACL6060** | en | fr | 34.86 | 60.48 | | **FLORES** | en | fr | 24 | 55.36 | ### SSUM Metrics | Dataset | Language | R-1_F1 | R-2_F1 | R-L_F1 | |:----------|:-----------|---------:|---------:|---------:| | **ICSI** | en | 22.9 | 2.7 | 20.4 | ### ST Metrics | Dataset | Source Language | Target Language | Bleu ⬆ | CHRF ⬆ | |:----------|:------------------|:------------------|-------:|-------:| | **ACL6060** | en | fr | 28.65 | 56.2 | | **ACL6060** | en | de | 19.12 | 49.06 | | **MUSTC** | en | de | 16.98 | 45.48 | | **MUSTC** | en | it | 14.68 | 43.03 | | **MUSTC** | en | fr | 19.09 | 48.09 | | **MUSTC** | en | es | 20.42 | 49.07 | ## Framework versions - Transformers 4.45.0 - Pytorch 2.3.1+cu124.post2 - Datasets 3.2.0 - Tokenizers 0.20.0
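For reference, the LLM-as-a-judge evaluation described in the SQA note above can be reproduced with a short script. This is a minimal sketch, assuming the official `groq` Python client and a `GROQ_API_KEY` in the environment; the `judge_answer` helper and the abbreviated system prompt are illustrative assumptions, not code from the SpeechLMM evaluation pipeline.

```python
import json
from groq import Groq  # assumes GROQ_API_KEY is set in the environment

# Abbreviated here; use the full system prompt quoted in the note above.
SYSTEM_PROMPT = "You are a helpful assistant that evaluates answers to questions given a certain context. ..."

def judge_answer(context: str, question: str, answer: str) -> dict:
    """Hypothetical helper: asks Llama3-70b-8192 (via the Groq API) to judge an answer."""
    client = Groq()
    user_prompt = f"Context: {context}\nQuestion: {question}\nAnswer: {answer}"
    completion = client.chat.completions.create(
        model="llama3-70b-8192",
        temperature=0.0,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    )
    # The judge is instructed to reply with {"correct_answer": <true/false>, "rationale": "..."}.
    return json.loads(completion.choices[0].message.content)

# Accuracy is then the fraction of examples where judge_answer(...)["correct_answer"] is True.
```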
kapalbalap/blockassist-bc-peaceful_wary_owl_1755685761
kapalbalap
2025-08-20T10:30:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:29:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1755684268
koloni
2025-08-20T10:29:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:29:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
thanobidex/blockassist-bc-colorful_shiny_hare_1755684259
thanobidex
2025-08-20T10:29:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "colorful shiny hare", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:29:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - colorful shiny hare --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ElbertFliek/MyGemmaNPC
ElbertFliek
2025-08-20T10:28:44Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-19T14:38:22Z
--- base_model: google/gemma-3-270m-it library_name: transformers model_name: MyGemmaNPC tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for MyGemmaNPC This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ElbertFliek/MyGemmaNPC", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.8.0+cu126 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
meetween/Llama-speechlmm-1.0-m
meetween
2025-08-20T10:28:39Z
3
0
transformers
[ "transformers", "safetensors", "llava", "feature-extraction", "generated_from_trainer", "endpoints_compatible", "region:us" ]
feature-extraction
2025-03-01T11:11:38Z
--- library_name: transformers tags: - generated_from_trainer model-index: - name: Llama-speechlmm-1.0-m results: [] --- ## Model information The SpeechLMM 1.0 collection of multimodal and multilingual large language models is a collection of instruction-tuned generative models in 4 different sizes: S (2B), M (4B), L (9B) and XL (71B), supporting text, audio and video as input and only text as output. The SpeechLMM 1.0 models are optimized for various X-to-text generation tasks, namely: - Machine Translation - Automatic Speech Recognition - Speech Translation - Speech Summarization - Spoken Question Answering - Spoken Language Understanding (beta) - Visual Speech Recognition (beta) **Model Developer:** Meetween consortium **Supported Languages:** English, French, Italian, German, and Spanish are officially supported (for a subset of the supported tasks). The Llama 3.X backbone and the SeamlessM4T v2 audio encoder have been trained on a broader collection of languages than these 5 supported languages, so the model might exhibit good performance on other languages too. **Model Release Date:** Feb 28, 2025 **License:** see [LICENSE](LICENSE) ### Model Architecture SpeechLMM 1.0 an auto-regressive multimodal language model based on a Llama 3.X backbone (X varies with the model size), a speech-specific stack consisting of a pre-trained audio encoder ([SeamlessM4T v2](https://ai.meta.com/research/publications/seamless-multilingual-expressive-and-streaming-speech-translation/)) and an audio adapter, and a video-specific stack consisting of a pre-trained video encoder ([Auto-AVSR](https://ieeexplore.ieee.org/document/10096889)) and a video adapter. <!-- TODO: add the image of the model architecture here --> | Model | Params | Input modalities | Output modalities | Context Length | |:---------------- |:----------- |:------------------------------------------ |:----------------- |:-------------- | | SpeechLMM 1.0 S | 2B (2.17B) | Multilingual text and audio, English video | Multilingual Text | 128k | | SpeechLMM 1.0 M | 4B (4.15B) | Multilingual text and audio, English video | Multilingual Text | 128k | | SpeechLMM 1.0 L | 9B (8.98B) | Multilingual text and audio, English video | Multilingual Text | 128k | | SpeechLMM 1.0 XL (beta) | 71B (71.5B) | Multilingual text and audio, English video | Multilingual Text | 128k | #### Audio and video encoders For all the 4 sizes of SpeechLMM 1.0, the audio encoder is **SeamlessM4T v2 Large** (`facebook/seamless-m4t-v2-large`) and the video encoder is **Auto-AVSR** (`vsr_trlrs3vox2_base`). #### Audio and video adapters For all the 4 sizes of SpeechLMM 1.0, the audio and video adapters are: | Modality | Architecture | Number of layers | Compression factor | | :------- | :----------- | :--------------- | :----------------- | | Audio | MLP | 4 | 1 | | Video | Window-level Q-former <br> (4 queries) | 4 | 4 | #### LLM backbone | Model | Backbone | |:---------------- |:---------------------- | | SpeechLMM 1.0 S | Llama 3.2 1B Instruct | | SpeechLMM 1.0 M | Llama 3.2 3B Instruct | | SpeechLMM 1.0 L | Llama 3.1 8B Instruct | | SpeechLMM 1.0 XL (beta) | Llama 3.3 70B Instruct | ## How to use Currently, this model can only be used via our [`speechlmm`](https://github.com/meetween/speechlmm) codebase. Refer to the instructions there for more details. Important: before you can use this model, you must download the SeamlessM4T v2 speech encoder and the Auto-AVSR video encoder by following the instructions provided in the README of the above repo. 
Please note that by doing so, you agree with their respective license terms. ## Training Data ### Monolingual | TASK | Task name | Dataset | Language | License | | -------- | ---------------------------- | ------------------ | -------- | ------------------------------------------ | | **ASR** | Automatic Speech Recognition | **LibriHeavy** | en | CC-BY-4.0 | | | | **LibriTTS** | en | CC BY 4.0 | | | | **AMI** | en | CC-BY-4.0 | | | | **ICSI** | en | CC-BY-4.0 | | **LIPREAD** | Visual Speech Recognition | **LRS2-BBC** | en | Custom | | **SSUM** | Speech Summarization | **AMI** | en | CC-BY-4.0 | | | | **ICSI** | en | CC-BY-4.0 | | **SQA** | Spoken Question Answering | **Spoken SQUAD** | en | CC-BY-SA-4.0 | | **SLU** | Spoken Language Understanding| **SLURP** | en | CC BY 4.0 (text) <br> CC BY-NC 4.0 (audio) | ### Multilingual | TASK | Task name | Dataset | Language | License | | ---------------- | ----------------------------- | ------------------------------------ | ------------------------------------------- | ------------------------------------------ | | **ASR** | Automatic Speech Recognition | **CoVoST2** | en, fr, it, de, es | CC0 | | | | **CommonVoice** | en, fr, it, de, es | Apache-2.0 | | **ST** | Speech-to-text Translation | **CoVoST2** | en → de, {fr, it, de, es} → en | CC0 | | | | **EuroParl-ST** | {en, fr, it, de, es} → {en, fr, it, de, es} | CC-BY-NC-4.0 | | **MT** | Machine Translation | **EuroParl-ST** | {en, fr, it, de, es} → {en, fr, it, de, es} | CC-BY-NC-4.0 | | **TextInstruct** | Text Instruction Following | **Everything_Instruct_Multilingual** | en, fr, it, de, es, ru, zh, ko, ur, la, ar,<br>hi, ja, nl, pt | Apache-2.0 | | **SLU** | Spoken Language Understanding | **Speech-Massive** | fr, de | CC-BY-NC-SA-4.0 | ## Evaluation Results The following results specifically refer to the M model. ### ASR Metrics | Dataset | Language | WER ⬇ | |:----------|:-----------|------:| | **MUSTC** | en | 16.8 | | **ACL6060** | en | 18.33 | | **MTEDX** | es | 26.06 | | **MTEDX** | fr | 28.13 | | **MTEDX** | it | 27.79 | ### SQA Metrics | Dataset | Language | Accuracy ⬆ | |:--------------|:-----------|-----------:| | **Spoken SQuAD** | en | 80.56 | **NOTE**: Accuracy is measured with an LLM as a judge (**Llama3-70b-8192**, via the Groq API) using the following prompts: - **System prompt** You are a helpful assistant that evaluates answers to questions given a certain context. You will be given inputs of the form:<br> Context: \<CONTEXT\><br> Question: \<QUESTION\><br> Answer: \<ANSWER\><br> Your task is to determine if the given answer is correct or not, assuming the correct answer is contained in the context. Your response should be formatted as a JSON string having the following structure: {"correct_answer": \<true/false\>, "rationale": \<RATIONALE\>} where 'rationale' must be a string explaining why the answer is correct or incorrect. If you need to include double quote characters (") in the 'rationale' string, you must escape them with a backslash (\\). For example, if you want to include the string "Hello, World!", you should write it as \\"Hello, World!\\". 
- **User prompt** Context: \<CONTEXT\><br> Question: \<QUESTION\><br> Answer: \<ANSWER\> ### MT Metrics | Dataset | Source Language | Target Language | Bleu ⬆ | CHRF ⬆ | |:----------|:------------------|:------------------|-------:|-------:| | **FLORES** | en | de | 28.12 | 57.42 | | **FLORES** | en | es | 22.73 | 51.68 | | **FLORES** | en | it | 20.97 | 50.83 | | **FLORES** | en | fr | 36.05 | 61.47 | | **ACL6060** | en | fr | 40.68 | 64.9 | ### SSUM Metrics | Dataset | Language | R-1_F1 | R-2_F1 | R-L_F1 | |:----------|:-----------|---------:|---------:|---------:| | ICSI | en | 26.8 | 2.5 | 24.3 | ### ST Metrics | Dataset | Source Language | Target Language | Bleu ⬆ | CHRF ⬆ | |:----------|:------------------|:------------------|-------:|-------:| | **MUSTC** | en | de | 19.57 | 48.47 | | **ACL6060** | en | de | 23.75 | 53.86 | | **ACL6060** | en | fr | 31.49 | 58.99 | | **MUSTC** | en | fr | 25.21 | 52.24 | | **MUSTC** | en | es | 23.94 | 51.8 | | **MUSTC** | en | it | 16.86 | 45.14 | ## Framework versions - Transformers 4.45.0 - Pytorch 2.3.1+cu124.post2 - Datasets 3.2.0 - Tokenizers 0.20.0
kapalbalap/blockassist-bc-peaceful_wary_owl_1755685601
kapalbalap
2025-08-20T10:27:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:27:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755683923
coelacanthxyz
2025-08-20T10:26:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:26:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky thriving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xinnn32/blockassist-bc-meek_winged_caterpillar_1755685543
xinnn32
2025-08-20T10:26:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:26:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
rourkerhotmail1/blockassist-bc-stalking_scruffy_walrus_1755683797
rourkerhotmail1
2025-08-20T10:26:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stalking scruffy walrus", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:26:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stalking scruffy walrus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
flemmingpetter2/blockassist-bc-hardy_subtle_snake_1755683747
flemmingpetter2
2025-08-20T10:24:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hardy subtle snake", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:24:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hardy subtle snake --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aralper18/blockassist-bc-gilded_tangled_albatross_1755685407
aralper18
2025-08-20T10:24:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gilded tangled albatross", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:24:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gilded tangled albatross --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
VoilaRaj/81_v6rh8M
VoilaRaj
2025-08-20T10:24:04Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-20T10:20:04Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
Uppal-Farm-Girl-Viral-Video-Original-Link/Full.Uppal.Farm.Girl.Viral.Video.Original.Link.Official
Uppal-Farm-Girl-Viral-Video-Original-Link
2025-08-20T10:21:52Z
0
0
null
[ "region:us" ]
null
2025-08-20T10:21:22Z
joanna302/Qwen3-8B-Base_ar_alpaca_1_part_SFT_0.0002
joanna302
2025-08-20T10:21:51Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "sft", "trl", "unsloth", "conversational", "base_model:unsloth/Qwen3-8B-Base", "base_model:finetune:unsloth/Qwen3-8B-Base", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-18T06:32:32Z
--- base_model: unsloth/Qwen3-8B-Base library_name: transformers model_name: Qwen3-8B-Base_ar_alpaca_1_part_SFT_0.0002 tags: - generated_from_trainer - sft - trl - unsloth licence: license --- # Model Card for Qwen3-8B-Base_ar_alpaca_1_part_SFT_0.0002 This model is a fine-tuned version of [unsloth/Qwen3-8B-Base](https://huggingface.co/unsloth/Qwen3-8B-Base). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="joanna302/Qwen3-8B-Base_ar_alpaca_1_part_SFT_0.0002", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/prism-eval/Qwen3-8B-Base_ar_alpaca_1_part_SFT_0.0002/runs/3088gvuy) This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.2 - Pytorch: 2.8.0 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
xinnn32/blockassist-bc-meek_winged_caterpillar_1755685220
xinnn32
2025-08-20T10:20:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:20:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
afung/pika-towel-folding-ee_absolute-fisheye
afung
2025-08-20T10:20:48Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "diffusion", "dataset:afung/pika-towel-folding-ee_absolute-fisheye", "arxiv:2303.04137", "license:apache-2.0", "region:us" ]
robotics
2025-08-19T19:57:00Z
--- datasets: afung/pika-towel-folding-ee_absolute-fisheye library_name: lerobot license: apache-2.0 model_name: diffusion pipeline_tag: robotics tags: - robotics - diffusion - lerobot --- # Model Card for diffusion <!-- Provide a quick summary of what the model is/does. --> [Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=diffusion \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
tankh99/ppo_lunarlander_v3
tankh99
2025-08-20T10:20:30Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-08-20T07:37:48Z
--- library_name: stable-baselines3 tags: - LunarLander-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v3 type: LunarLander-v3 metrics: - type: mean_reward value: 300.22 +/- 20.57 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v3** This is a trained model of a **PPO** agent playing **LunarLander-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
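The usage section above is still a TODO; here is a minimal sketch of loading and evaluating the checkpoint with Stable-Baselines3. The checkpoint filename and the availability of `LunarLander-v3` in your gymnasium install are assumptions; check the repository's file list and your gymnasium version.

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption; see the repo's file list).
checkpoint = load_from_hub(repo_id="tankh99/ppo_lunarlander_v3", filename="ppo-LunarLander-v3.zip")
model = PPO.load(checkpoint)

# Evaluate on LunarLander-v3 (requires a gymnasium version that ships the v3 environment).
env = gym.make("LunarLander-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```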
tgrhn/whisper-large-v3-turbo_finetuned-3
tgrhn
2025-08-20T10:18:43Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-20T10:18:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
b0bbyhill/blockassist-bc-grunting_iridescent_anaconda_1755685047
b0bbyhill
2025-08-20T10:18:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "grunting iridescent anaconda", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:18:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - grunting iridescent anaconda --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755683472
calegpedia
2025-08-20T10:17:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:17:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stealthy slimy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kapalbalap/blockassist-bc-peaceful_wary_owl_1755684996
kapalbalap
2025-08-20T10:17:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:17:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
qing223101/blockassist-bc-coiled_stinging_hummingbird_1755682426
qing223101
2025-08-20T10:15:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "coiled stinging hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:14:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - coiled stinging hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
luismirandacruz/dqn-SpaceInvadersNoFrameskip-v4
luismirandacruz
2025-08-20T10:14:55Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-08-20T09:22:08Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 29.00 +/- 64.30 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga luismirandacruz -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga luismirandacruz -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga luismirandacruz ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
kapalbalap/blockassist-bc-peaceful_wary_owl_1755684818
kapalbalap
2025-08-20T10:14:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:14:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
chainway9/blockassist-bc-untamed_quick_eel_1755683156
chainway9
2025-08-20T10:14:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:14:16Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kapalbalap/blockassist-bc-peaceful_wary_owl_1755684660
kapalbalap
2025-08-20T10:11:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:11:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yaelahnal/blockassist-bc-mute_clawed_crab_1755684395
yaelahnal
2025-08-20T10:11:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute clawed crab", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:07:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute clawed crab --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jxchlee/koelectra-base-summarization1
jxchlee
2025-08-20T10:09:51Z
8
0
transformers
[ "transformers", "safetensors", "electra", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-13T06:26:41Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This is temporary model while training to make Korean article Summarize model. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
aralper18/blockassist-bc-gilded_tangled_albatross_1755684538
aralper18
2025-08-20T10:09:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gilded tangled albatross", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:09:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gilded tangled albatross --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
b0bbyhill/blockassist-bc-grunting_iridescent_anaconda_1755684505
b0bbyhill
2025-08-20T10:09:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "grunting iridescent anaconda", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:09:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - grunting iridescent anaconda --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ilkhom199/0ff66c0a-4efe-429b-a381-442f9d2ae15a
ilkhom199
2025-08-20T10:09:17Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-20T10:08:22Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kapalbalap/blockassist-bc-peaceful_wary_owl_1755684478
kapalbalap
2025-08-20T10:09:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:08:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
valleriee/pii-model-6-recreate
valleriee
2025-08-20T10:08:13Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-20T10:04:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlx-community/DeepSeek-V3.1-Base-4bit
mlx-community
2025-08-20T10:07:20Z
0
2
mlx
[ "mlx", "safetensors", "deepseek_v3", "text-generation", "conversational", "custom_code", "license:mit", "4-bit", "region:us" ]
text-generation
2025-08-19T15:19:37Z
--- license: mit library_name: mlx base_model: deepseek-ai/DeepSeek-v3.1-Base tags: - mlx pipeline_tag: text-generation --- # mlx-community/DeepSeek-V3.1-Base-4bit This model [mlx-community/DeepSeek-V3.1-Base-4bit](https://huggingface.co/mlx-community/DeepSeek-V3.1-Base-4bit) was converted to MLX format from [deepseek-ai/DeepSeek-v3.1-Base](https://huggingface.co/deepseek-ai/DeepSeek-v3.1-Base) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/DeepSeek-v3.1-Base-4bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
kapalbalap/blockassist-bc-peaceful_wary_owl_1755684308
kapalbalap
2025-08-20T10:06:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:05:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mlx-community/DeepSeek-V3-0324-4bit
mlx-community
2025-08-20T10:05:58Z
1,547
36
mlx
[ "mlx", "safetensors", "deepseek_v3", "text-generation", "conversational", "custom_code", "license:mit", "4-bit", "region:us" ]
text-generation
2025-03-24T14:34:06Z
--- license: mit library_name: mlx base_model: deepseek-ai/DeepSeek-v3-0324 tags: - mlx pipeline_tag: text-generation --- # mlx-community/DeepSeek-v3-0324-4bit This model [mlx-community/DeepSeek-v3-0324-4bit](https://huggingface.co/mlx-community/DeepSeek-v3-0324-4bit) was converted to MLX format from [deepseek-ai/DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324) using mlx-lm version **0.26.3**. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/DeepSeek-v3-0324-4bit") prompt = "hello" if tokenizer.chat_template is not None: messages = [{"role": "user", "content": prompt}] prompt = tokenizer.apply_chat_template( messages, add_generation_prompt=True ) response = generate(model, tokenizer, prompt=prompt, verbose=True) ```
xinnn32/blockassist-bc-meek_winged_caterpillar_1755684300
xinnn32
2025-08-20T10:05:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:05:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
abhinayadutta/flan-t5-large-counter-speech-gen_FULL_TUNED
abhinayadutta
2025-08-20T10:05:00Z
0
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2025-08-20T10:04:33Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ustc-community/dfine-nano-coco
ustc-community
2025-08-20T10:04:40Z
4,552
7
transformers
[ "transformers", "safetensors", "d_fine", "object-detection", "vision", "en", "dataset:coco", "arxiv:2410.13842", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2025-03-28T12:48:09Z
--- library_name: transformers license: apache-2.0 language: - en pipeline_tag: object-detection tags: - object-detection - vision datasets: - coco --- ## D-FINE ### **Overview** The D-FINE model was proposed in [D-FINE: Redefine Regression Task in DETRs as Fine-grained Distribution Refinement](https://arxiv.org/abs/2410.13842) by Yansong Peng, Hebei Li, Peixi Wu, Yueyi Zhang, Xiaoyan Sun, and Feng Wu. This model was contributed by [VladOS95-cyber](https://github.com/VladOS95-cyber) with the help of [@qubvel-hf](https://huggingface.co/qubvel-hf). This is the HF transformers implementation of D-FINE; the checkpoint suffix indicates the training data: _coco -> model trained on COCO; _obj365 -> model trained on Objects365; _obj2coco -> model trained on Objects365 and then finetuned on COCO. ### **Performance** D-FINE is a powerful real-time object detector that achieves outstanding localization precision by redefining the bounding box regression task in DETR models. D-FINE comprises two key components: Fine-grained Distribution Refinement (FDR) and Global Optimal Localization Self-Distillation (GO-LSD). ![COCO.png](https://huggingface.co/datasets/vladislavbro/images/resolve/main/COCO.PNG) ### **How to use** ```python import torch import requests from PIL import Image from transformers import DFineForObjectDetection, AutoImageProcessor url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) image_processor = AutoImageProcessor.from_pretrained("ustc-community/dfine-nano-coco") model = DFineForObjectDetection.from_pretrained("ustc-community/dfine-nano-coco") inputs = image_processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) results = image_processor.post_process_object_detection(outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.3) for result in results: for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]): score, label = score.item(), label_id.item() box = [round(i, 2) for i in box.tolist()] print(f"{model.config.id2label[label]}: {score:.2f} {box}") ``` ### **Training** D-FINE is trained on the COCO (Lin et al. [2014]) train2017 split and validated on the COCO val2017 split. We report the standard AP metric (averaged over uniformly sampled IoU thresholds ranging from 0.50 to 0.95 with a step size of 0.05) and AP on the 5,000-image val2017 split (AP_val_5000), which is commonly used in real scenarios. ### **Applications** D-FINE is ideal for real-time object detection in diverse applications such as **autonomous driving**, **surveillance systems**, **robotics**, and **retail analytics**. Its enhanced flexibility and deployment-friendly design make it suitable for both edge devices and large-scale systems, while ensuring high accuracy and speed in dynamic, real-world environments.
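To sanity-check the detections visually, a short follow-up sketch can draw the predicted boxes on the input image. This is not part of the official D-FINE examples: it assumes `image`, `model`, and `results` are the objects created in the snippet above, and `detections.png` is just an illustrative output path.

```python
# Hedged visualization sketch: reuses `image`, `model`, and `results` from the
# usage example above and overlays the predicted boxes with PIL.
from PIL import ImageDraw

annotated = image.copy()
draw = ImageDraw.Draw(annotated)

for result in results:
    for score, label_id, box in zip(result["scores"], result["labels"], result["boxes"]):
        x_min, y_min, x_max, y_max = box.tolist()
        label = model.config.id2label[label_id.item()]
        # Draw the box and a small text label just above it.
        draw.rectangle([x_min, y_min, x_max, y_max], outline="red", width=2)
        draw.text((x_min, max(0, y_min - 12)), f"{label}: {score.item():.2f}", fill="red")

annotated.save("detections.png")
```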
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755682636
katanyasekolah
2025-08-20T10:03:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky sprightly cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:03:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - silky sprightly cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Team-Atom/smolvla_record_pp_ryb_t_64_40000
Team-Atom
2025-08-20T10:03:11Z
0
0
lerobot
[ "lerobot", "safetensors", "robotics", "smolvla", "dataset:Team-Atom/PiPl_RYB_test", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-08-20T10:02:52Z
--- base_model: lerobot/smolvla_base datasets: Team-Atom/PiPl_RYB_test library_name: lerobot license: apache-2.0 model_name: smolvla pipeline_tag: robotics tags: - robotics - smolvla - lerobot --- # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is a short guide to training and running inference/evaluation: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=smolvla \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
liukevin666/blockassist-bc-yawning_striped_cassowary_1755684016
liukevin666
2025-08-20T10:02:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning striped cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:01:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning striped cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fafsfa/Qwen3-0.6B-Gensyn-Swarm-tricky_fierce_bobcat
fafsfa
2025-08-20T10:02:58Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am tricky_fierce_bobcat", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-20T07:07:40Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am tricky_fierce_bobcat --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755682504
vwzyrraz7l
2025-08-20T10:02:33Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T10:02:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yaelahnal/blockassist-bc-mute_clawed_crab_1755683944
yaelahnal
2025-08-20T10:00:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute clawed crab", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T09:59:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute clawed crab --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
prithivMLmods/Omega-Qwen2.5-Coder-3B
prithivMLmods
2025-08-20T09:59:17Z
33
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "Thinking: Disabled", "Forge", "code", "mot", "stem", "coder", "trl", "conversational", "en", "zh", "dataset:prithivMLmods/Open-Omega-Forge-1M", "base_model:Qwen/Qwen2.5-Coder-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Coder-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-15T12:55:24Z
--- license: apache-2.0 tags: - 'Thinking: Disabled' - Forge - code - mot - stem - coder - trl datasets: - prithivMLmods/Open-Omega-Forge-1M language: - en - zh base_model: - Qwen/Qwen2.5-Coder-3B-Instruct pipeline_tag: text-generation library_name: transformers --- ![9.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/RXFgDRiO1VBm6jJcH2TPr.png) # **Omega-Qwen2.5-Coder-3B** > **Omega-Qwen2.5-Coder-3B** is a compact and high-efficiency code-focused model fine-tuned on **Qwen2.5-Coder-3B-Instruct**, using the symbolic-rich **Open-Omega-Forge-1M** dataset. Designed specifically for **hard-coded tasks** and deterministic computation, this model runs in a **"thinking-disabled"** mode—delivering precise, structured outputs with minimal hallucination, making it ideal for rigorous coding workflows and embedded logic applications. > Thinking: Disabled > \[!note] > GGUF: [https://huggingface.co/prithivMLmods/Omega-Qwen2.5-Coder-3B-GGUF](https://huggingface.co/prithivMLmods/Omega-Qwen2.5-Coder-3B-GGUF) ## **Key Features** 1. **Purpose-Built for Hard Coding** Specially tuned to perform precise, low-level code generation with **minimal reasoning overhead**. Ideal for edge-case algorithms, embedded scripting, and deterministic logic patterns. 2. **Optimized Qwen2.5 Foundation** Built on **Qwen2.5-Coder-3B-Instruct**, benefiting from its robust token handling, instruction following, and multilingual code representation. 3. **Backed by Open-Omega-Forge-1M Dataset** Trained on a curated mix of code, math, and logic problems focused on **symbolic clarity** and **STEM coherence**, drawn from sources like OpenCodeReasoning, MathX-5M, OpenMathReasoning, and more. 4. **Thinking Disabled Mode** The model avoids overgeneralizing or injecting speculative reasoning. It executes tasks **as-is**—perfect for structured prompts, tight constraints, and automation pipelines. 5. **Structured Output Control** Outputs in **JSON**, **YAML**, **Python**, **Markdown**, and **LaTeX**, tailored for script generation, data serialization, and scientific formatting. 6. **Efficient 3B Deployment** Lightweight and scalable for **mid-tier GPUs**, **offline dev environments**, or **local inference systems**, while maintaining solid performance on symbolic tasks. --- ## **Quickstart with Transformers** ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "prithivMLmods/Omega-Qwen2.5-Coder-3B" model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map="auto" ) tokenizer = AutoTokenizer.from_pretrained(model_name) prompt = "Write a Python function to compute factorial iteratively." messages = [ {"role": "system", "content": "You are a deterministic code generator. No assumptions. 
No extra explanations."}, {"role": "user", "content": prompt} ] text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) model_inputs = tokenizer([text], return_tensors="pt").to(model.device) generated_ids = model.generate( **model_inputs, max_new_tokens=256 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ``` --- ## **Intended Use** * Embedded logic and deterministic function generation * Script automation and toolchain integration * Codegen under fixed constraints or symbolic inputs * Lightweight STEM applications on edge devices or offline clusters * Tools where **"no thinking" = better stability** ## **Limitations** * Not suitable for high-level reasoning or open-ended thought processes * General chat performance is minimal by design * Lacks emotional intelligence or creative composition capability * Assumes user provides **clear, explicit instructions** for best results
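The structured-output behavior described under "Structured Output Control" can be exercised with plain prompting. The sketch below is a minimal illustration that reuses the `model` and `tokenizer` objects from the quickstart above; the JSON schema and field names are made up for the example, and compliance comes from the prompt rather than from any constrained-decoding mechanism.

```python
# Hedged sketch: JSON-constrained prompting, reusing `model` and `tokenizer`
# from the quickstart above. The schema below is illustrative, not prescribed.
import json

messages = [
    {"role": "system", "content": "Return only valid JSON. No prose, no code fences."},
    {"role": "user", "content": (
        "Describe a function `factorial(n)` as JSON with keys "
        "'name', 'params', and 'returns'."
    )},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
response = tokenizer.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]

# Validate that the reply parses as JSON before using it downstream;
# if the model drifts from the schema, json.loads will raise.
payload = json.loads(response)
print(payload)
```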
xinnn32/blockassist-bc-meek_winged_caterpillar_1755683861
xinnn32
2025-08-20T09:58:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T09:58:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kapalbalap/blockassist-bc-peaceful_wary_owl_1755683745
kapalbalap
2025-08-20T09:56:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T09:56:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cixzer/blockassist-bc-gregarious_long_cheetah_1755682773
cixzer
2025-08-20T09:56:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gregarious long cheetah", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T09:56:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gregarious long cheetah --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1755682117
koloni
2025-08-20T09:56:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T09:56:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1755681902
coelacanthxyz
2025-08-20T09:52:46Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T09:52:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky thriving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
loyal-misc/jenny
loyal-misc
2025-08-20T09:51:53Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:LyliaEngine/Pony_Diffusion_V6_XL", "base_model:adapter:LyliaEngine/Pony_Diffusion_V6_XL", "license:unlicense", "region:us" ]
text-to-image
2025-08-20T07:25:25Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: jenny-011.png text: jenny,anthro, base_model: LyliaEngine/Pony_Diffusion_V6_XL instance_prompt: jenny, furry, female license: unlicense --- # jenny <Gallery /> ## Trigger words You should use `jenny`, `furry`, and `female` to trigger the image generation. ## Download model [Download](/loyal-misc/jenny/tree/main) them in the Files & versions tab.
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1755681711
helmutsukocok
2025-08-20T09:48:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T09:48:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - loud scavenging kangaroo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
yaelahnal/blockassist-bc-mute_clawed_crab_1755683210
yaelahnal
2025-08-20T09:48:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute clawed crab", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T09:47:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute clawed crab --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
xinnn32/blockassist-bc-meek_winged_caterpillar_1755683216
xinnn32
2025-08-20T09:47:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T09:47:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kelasbgd/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-quiet_endangered_ant
kelasbgd
2025-08-20T09:44:48Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am quiet_endangered_ant", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-17T16:05:05Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am quiet_endangered_ant --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1755683004
Ferdi3425
2025-08-20T09:44:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious deadly otter", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T09:44:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious deadly otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
numind/NuMarkdown-8B-Thinking
numind
2025-08-20T09:42:17Z
4,611
185
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "OCR", "vision-language", "VLM", "Reasoning", "document-to-markdown", "qwen2.5", "markdown", "extraction", "RAG", "license:mit", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-to-text
2025-07-29T18:39:08Z
--- license: mit base_model: Qwen/Qwen2.5-VL-7B tags: - OCR - vision-language - VLM - Reasoning - document-to-markdown - qwen2.5 - markdown - extraction - RAG model_name: NuMarkdown-8B-Thinking library_name: transformers pipeline_tag: image-to-text --- <p align="center"> <a href="https://nuextract.ai/"> <img src="numind.svg" width="400" height="400"/> </a> </p> <p align="center"> 🖥️ <a href="https://nuextract.ai/">API / Platform</a>&nbsp&nbsp | &nbsp&nbsp🗣️ <a href="https://discord.gg/3tsEtJNCDe">Discord</a>&nbsp&nbsp | &nbsp&nbsp🔗 <a href="https://github.com/numindai/NuMarkdown">GitHub</a>&nbsp&nbsp | &nbsp&nbsp🤗 <a href="https://huggingface.co/spaces/numind/NuMarkdown-8b-Thinking">Demo</a> </p> --- # Reasoning comes to OCR 🧠✨📄🤘 **NuMarkdown-8B-Thinking** is the first reasoning OCR VLM. It is specifically trained to convert documents into clean Markdown files, well suited for RAG applications. It generates thinking tokens to figure out the layout of the document before generating the Markdown file. It is particularly good at understanding documents with weird layouts and complex tables. The number of thinking tokens can vary from 20% to 500% of the final answer, depending on the task difficulty. **NuMarkdown-8B-Thinking** is a fine-tune of **Qwen 2.5-VL-7B** on synthetic Doc &rarr; Reasoning &rarr; Markdown examples, followed by an RL phase (GRPO) with a layout-centric reward. Try it out in [the 🤗 space!](https://huggingface.co/spaces/numind/NuMarkdown-8b-Thinking) ## Results **NuMarkdown-8B-Thinking** outperforms generic non-reasoning models like GPT-4o and specialized OCR models like OCRFlux. It is competitive against large reasoning closed-source models like Gemini 2.5. ### Arena ranking against popular alternatives (using trueskill-2 ranking system, with around 500 model-anonymized votes): <p align="center"> | Rank | Model | μ | σ | μ − 3σ | | ---- | --------------------------------------- | ----- | ---- | ------ | | 🥇 1 | **gemini-flash-reasoning** | 26.75 | 0.80 | 24.35 | | 🥈 2 | **NuMarkdown-reasoning** | 26.10 | 0.79 | 23.72 | | 🥉 3 | **NuMarkdown-reasoning-w/o\_grpo** | 25.32 | 0.80 | 22.93 | | 4 | **OCRFlux-3B** | 24.63 | 0.80 | 22.22 | | 5 | **gpt-4o** | 24.48 | 0.80 | 22.08 | | 6 | **gemini-flash-w/o\_reasoning** | 24.11 | 0.79 | 21.74 | | 7 | **RolmoOCR** | 23.53 | 0.82 | 21.07 | </p> *We plan to release a markdown arena, similar to llmArena, for complex document-to-markdown tasks to provide a tool to evaluate different solutions.* ### Win/Draw/Lose-rate against other models (image-only): <p align="center"> <img src="bar plot.png" width="700"/> </p> ## Training 1. **SFT**: Single epoch supervised fine-tuning on synthetic reasoning traces generated from public PDFs. 2. **RL (GRPO)**: RL phase using a layout-centric reward with difficult image examples. ## Example: <p align="center"> <img src="ex1.png" width="500"/> </p> ``` <think> 1. **Analyze the overall structure:** The document consists of two main sections, each containing a header, a set of bullet points, a title, a table, and a footer. The layout is consistent within each section but different between the two. 2. **Handle the first section header and bullet points:** - The top left corner has "Generalitat de Catalunya Departament d'Educació Institut Gal·lecs". This is a consistent header and should be transcribed as a level 3 heading. - The top right corner has "Curs 2021-22". This is a distinct header and should be transcribed as a level 3 heading. - Below the header, there are four bullet points. 
These are standard list items. I will use Markdown's unordered list syntax (`- `). 3. **Process the first section's main content - Table 1:** - **Identify table structure:** This is a complex table with a clear header row: "MATERIAL", "TÍTOL", "AUTOR", "EDITORIAL", "ISBN". - **Determine columns:** There are 5 columns. - **Handle merged cells:** - "Llengua Catalana" spans two rows. - "Llengua Castellana" spans three rows. - "Anglès" spans two rows. - **Populate table data:** Transcribe all text, including multi-line content within cells (e.g., "Babulinka Books\nEditorialCol·lecció: Pettes\nJoies per a Grans Lectors.\n2013"). - **Markdown table creation:** Use standard Markdown table syntax, ensuring correct alignment and handling of multi-line content within cells. 4. **Handle the first section's footer:** - The footer contains "Codí: 04mp02", "Responsable: Coordinador Qualitat", "Versió: 5", "Full d'Informació a l'alumnat i famílies", "Aquest document pot quedar obsolet una vegada imprès", and "Pàgina 1 de 2". This is a simple text block at the bottom. I will transcribe it as a separate paragraph or list, maintaining the original layout if possible, but a simple paragraph is sufficient for a footer. 5. **Handle the second section header and bullet points:** - The top left corner has "Generalitat de Catalunya Departament d'Educació Institut Gal·lecs". This is a consistent header and should be transcribed as a level 3 heading. - The top right corner has "Curs 2021-22". This is a distinct header and should be transcribed as a level 3 heading. - Below the header, there are no bullet points, but there is a table. 6. **Process the second section's main content - Table 2:** - **Identify table structure:** This is a simpler table with a clear header row: "MATERIAL", "TÍTOL", "AUTOR", "EDITORIAL", "ISBN". - **Determine columns:** There are 5 columns. - **Populate table data:** Transcribe all text, including multi-line content within cells (e.g., "Projecte MOTIMATS-ONMAT (llicència digital) Tekman Books\nES GESTIONA A TRAVÉS DE L'AMPA AL SETEMBRE"). - **Markdown table creation:** Use standard Markdown table syntax. 7. **Handle the second section's footer:** - The footer contains "Codí: 04mp02", "Responsable: Coordinador Qualitat", "Versió: 5", "Full d'Informació a l'alumnat i famílies", "Aquest document pot quedar obsolet una vegada imprès", and "Pàgina 2 de 2". This is a simple text block at the bottom. I will transcribe it as a separate paragraph or list, maintaining the original layout if possible, but a simple paragraph is sufficient for a footer. 8. **Final review:** Check for any missed text, incorrect formatting, or issues with table alignment. Ensure all original content is present and accurately represented in GitHub-Flavored Markdown. The image placeholders are not needed as there are no images in the document. </think> <answer> ### Generalitat de Catalunya ### Departament d'Educació ### Institut Gal·lecs ### Curs 2021-22 - Els llibres de color blau indiquen que es manté respecte al curs anterior. - Els llibres de color groc indiquen que es tracta d'un canvi per a aquest curs. - Els llibres de color vermell indiquen que no s'han de comprar perquè van a càrrec del centre. - Matèries sense llibre: Dossiers, llicències i el Kit de tecnologia s'hauran de pagar i de recollir a través de l'AMPA a partir de setembre. # 1 ESO | MATERIAL | TÍTOL | AUTOR | EDITORIAL | ISBN | |---|---|---|---|---| | Llengua Catalana | Punt Volat | | Castellnou (Didacta +) | 9788417803124 | | | Duna, Diari d'un estiu. 
| Muriel Villanueva | Babulinka Books<br>EditorialCol·lecció: Pettes<br>Joies per a Grans Lectors.<br>2013 | 9788494159077 | | | El nen que xatejava amb Jack Sparrow. | Francesc Puigpelat | Bromera<br>Col·lecció: L'Elefant. 2015 | 9788490264072 | | Llengua Castellana | Proyecto Asterisco | | Castellnou (Didacta +) | 9788417803186 | | | Manzanas rojas | Luis Matilla | Ed. Anaya | 978846673989 | | | Fàbulas de Esopo | Jerry Pinkney | Vicens Vives | 978843671648 | | Anglès | Think Ahead ESO 1. Student's book.<br>Think Ahead ESO 1. Workbook (cat). | | Burlington Books<br>Burlington Books | 9788925300662<br>9789925300686 | Codí: 04mp02 Responsable: Coordinador Qualitat Versió: 5 Full d'Informació a l'alumnat i famílies Aquest document pot quedar obsolet una vegada imprès Pàgina 1 de 2 ### Generalitat de Catalunya ### Departament d'Educació ### Institut Gal·lecs ### Curs 2021-22 | MATERIAL | TÍTOL | AUTOR | EDITORIAL | ISBN | |---|---|---|---|---| | FRANCÈS | Nouvelle Génération A1-A2 | | Santillana | 9788490494745 | | CIÈNCIES EXPERIMENTALS | Science Bits<br>ES GESTIONA A TRAVÉS DE L'AMPA AL SETEMBRE | | | 9788412213485 (llicència digital) | | MATEMÀTIQUES | Projecte MOTIMATS-ONMAT (llicència digital) Tekman Books<br>ES GESTIONA A TRAVÉS DE L'AMPA AL SETEMBRE | | | | | TECNOLOGIA | Tecnologia 1 ESO | TEIDE | | 9788430783175 | | VISUAL I PLÀSTICA | SENSE LLIBRE-KIT DE MATERIAL | | | | | CIÈNCIES SOCIALS | SENSE LLIBRE-dossier | | | | Codí: 04mp02 Responsable: Coordinador Qualitat Versió: 5 Full d'Informació a l'alumnat i famílies Aquest document pot quedar obsolet una vegada imprès Pàgina 2 de 2 </answer> ``` ## Quick start: ## vLLM: ``` vllm serve numind/NuMarkdown-8B-Thinking --trust_remote_code --limit-mm-per-prompt image=1 ``` ```python from openai import OpenAI import base64 openai_api_key = "EMPTY" openai_api_base = "http://localhost:8000/v1" client = OpenAI( api_key=openai_api_key, base_url=openai_api_base, ) def encode_image(image_path): """ Encode the image file to base64 string """ with open(image_path, "rb") as image_file: return base64.b64encode(image_file.read()).decode('utf-8') base64_image = encode_image("image.png") data_url = f"data:image/jpeg;base64,{base64_image}" chat_response = client.chat.completions.create( model="numind/NuMarkdown-8B-Thinking", temperature=0.7, messages=[ { "role": "user", "content": [ { "type": "image_url", "image_url": {"url": data_url}, "min_pixels": 100 * 28 * 28, "max_pixels": 5000 * 28 * 28, }, ], }, ] ) result = chat_response.choices[0].message.content reasoning = result.split("<think>")[1].split("</think>")[0] answer = result.split("<answer>")[1].split("</answer>")[0] print(answer) ``` ## 🤗 Transformers: ```python import torch from PIL import Image from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration model_id = "numind/NuMarkdown-8B-reasoning" processor = AutoProcessor.from_pretrained( model_id, trust_remote_code=True, min_pixels=100*28*28, max_pixels=5000*28*28 ) model = Qwen2_5_VLForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", device_map="auto", trust_remote_code=True, ) img = Image.open("image.png").convert("RGB") messages = [{ "role": "user", "content": [ {"type": "image"}, ], }] prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) model_input = processor(text=prompt, images=[img], return_tensors="pt").to(model.device) with torch.no_grad(): model_output = model.generate(**model_input, temperature = 0.7, 
max_new_tokens=5000) result = processor.decode(model_output[0]) reasoning = result.split("<think>")[1].split("</think>")[0] answer = result.split("<answer>")[1].split("</answer>")[0] print(answer) ```
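For converting a whole folder of page images in one go, a small loop around the vLLM client from the quickstart is usually enough. The sketch below assumes the `vllm serve` command above is running locally and reuses the `client` and `encode_image` helpers defined there; the `scans/` and `markdown/` directory names are arbitrary choices for the example.

```python
# Hedged batch-conversion sketch: reuses `client` and `encode_image` from the
# vLLM example above and writes one .md file per input image.
from pathlib import Path

input_dir = Path("scans")        # assumed input folder of PNG pages
output_dir = Path("markdown")    # arbitrary output folder
output_dir.mkdir(exist_ok=True)

for image_path in sorted(input_dir.glob("*.png")):
    data_url = f"data:image/png;base64,{encode_image(image_path)}"
    chat_response = client.chat.completions.create(
        model="numind/NuMarkdown-8B-Thinking",
        temperature=0.7,
        messages=[{"role": "user", "content": [
            {"type": "image_url", "image_url": {"url": data_url}},
        ]}],
    )
    result = chat_response.choices[0].message.content
    # Keep only the final answer; the <think> block is dropped.
    answer = result.split("<answer>")[1].split("</answer>")[0]
    (output_dir / f"{image_path.stem}.md").write_text(answer)
```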
xinnn32/blockassist-bc-meek_winged_caterpillar_1755682886
xinnn32
2025-08-20T09:41:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek winged caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-08-20T09:41:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek winged caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).