Column summary (value ranges as reported by the dataset viewer):

| column | type | min | max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-08 06:28:05 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (546 classes) | - | - |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | - | - |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-08 06:27:40 |
| card | string (length) | 11 | 1.01M |
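Rows like the ones below can be reproduced with the Hub API. A minimal sketch, assuming `huggingface_hub` is installed and that sorting by `lastModified` approximates this dump's ordering (the actual export query is not recorded here):

```python
# Sketch: fetch the same columns this dump contains.
# Assumptions: sort key and limit; some fields may be None for sparse repos.
from huggingface_hub import HfApi, ModelCard

api = HfApi()
for m in api.list_models(sort="lastModified", direction=-1, limit=5):
    print(m.id, m.author, m.downloads, m.likes, m.library_name, m.pipeline_tag, m.created_at)
    try:
        # The `card` column is the raw README; ModelCard.load fetches it per repo.
        print(ModelCard.load(m.id).text[:200])
    except Exception:
        pass  # repos without a README card
```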
bdidudysidjd/blockassist-bc-tough_noisy_sheep_1757267524
bdidudysidjd
2025-09-07T17:52:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tough noisy sheep", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:52:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tough noisy sheep --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0907051026-epoch-5
vectorzhou
2025-09-07T17:51:27Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "fine-tuned", "trl", "OMWU", "conversational", "dataset:PKU-Alignment/PKU-SafeRLHF", "arxiv:2503.08942", "base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-07T17:49:09Z
--- base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT datasets: PKU-Alignment/PKU-SafeRLHF library_name: transformers model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU tags: - generated_from_trainer - text-generation - fine-tuned - trl - OMWU licence: license --- # Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0907051026-epoch-5", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/kv269ome) This model was trained with OMWU, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942). ### Framework versions - TRL: 0.13.0 - Transformers: 4.48.0 - Pytorch: 2.8.0+cu126 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite OMWU as: ```bibtex @misc{zhou2025extragradientpreferenceoptimizationegpo, title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback}, author={Runlong Zhou and Maryam Fazel and Simon S. Du}, year={2025}, eprint={2503.08942}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2503.08942}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
choyf3/smolvla_so101_test_20250903
choyf3
2025-09-07T17:50:41Z
0
0
lerobot
[ "lerobot", "safetensors", "smolvla", "robotics", "dataset:choyf3/so101_test_20250903", "arxiv:2506.01844", "base_model:lerobot/smolvla_base", "base_model:finetune:lerobot/smolvla_base", "license:apache-2.0", "region:us" ]
robotics
2025-09-07T17:40:28Z
--- base_model: lerobot/smolvla_base datasets: choyf3/so101_test_20250903 library_name: lerobot license: apache-2.0 model_name: smolvla pipeline_tag: robotics tags: - smolvla - lerobot - robotics --- # Model Card for smolvla <!-- Provide a quick summary of what the model is/does. --> [SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware. This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot). See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index). --- ## How to Get Started with the Model For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy). Below is the short version on how to train and run inference/eval: ### Train from scratch ```bash python -m lerobot.scripts.train \ --dataset.repo_id=${HF_USER}/<dataset> \ --policy.type=act \ --output_dir=outputs/train/<desired_policy_repo_id> \ --job_name=lerobot_training \ --policy.device=cuda \ --policy.repo_id=${HF_USER}/<desired_policy_repo_id> --wandb.enable=true ``` _Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._ ### Evaluate the policy/run inference ```bash python -m lerobot.record \ --robot.type=so100_follower \ --dataset.repo_id=<hf_user>/eval_<dataset> \ --policy.path=<hf_user>/<desired_policy_repo_id> \ --episodes=10 ``` Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint. --- ## Model Details - **License:** apache-2.0
xvjxuddydusvd/blockassist-bc-sniffing_placid_mink_1757267424
xvjxuddydusvd
2025-09-07T17:50:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sniffing placid mink", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:50:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sniffing placid mink --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0907051002-epoch-5
vectorzhou
2025-09-07T17:50:32Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "fine-tuned", "trl", "OMWU", "conversational", "dataset:PKU-Alignment/PKU-SafeRLHF", "arxiv:2503.08942", "base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-07T17:48:20Z
--- base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT datasets: PKU-Alignment/PKU-SafeRLHF library_name: transformers model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU tags: - generated_from_trainer - text-generation - fine-tuned - trl - OMWU licence: license --- # Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-0907051002-epoch-5", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/nw5q62al) This model was trained with OMWU, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942). ### Framework versions - TRL: 0.13.0 - Transformers: 4.48.0 - Pytorch: 2.8.0+cu126 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite OMWU as: ```bibtex @misc{zhou2025extragradientpreferenceoptimizationegpo, title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback}, author={Runlong Zhou and Maryam Fazel and Simon S. Du}, year={2025}, eprint={2503.08942}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2503.08942}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
cwayneconnor/blockassist-bc-mute_loud_lynx_1757267178
cwayneconnor
2025-09-07T17:48:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute loud lynx", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:47:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute loud lynx --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ACECA/lowMvMax_185
ACECA
2025-09-07T17:48:13Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-08-25T03:56:45Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
88-Sophie-Ra-in-Spiderman-V-ideo-O-ficial/Sophie.Rain.Spiderman.Video.Tutorial
88-Sophie-Ra-in-Spiderman-V-ideo-O-ficial
2025-09-07T17:48:06Z
0
0
null
[ "region:us" ]
null
2025-09-07T16:49:00Z
seams01/blockassist-bc-insectivorous_stubby_snake_1757265586
seams01
2025-09-07T17:46:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "insectivorous stubby snake", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:46:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - insectivorous stubby snake --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cyprogabellivari/blockassist-bc-singing_territorial_cod_1757267174
cyprogabellivari
2025-09-07T17:46:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "singing territorial cod", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:46:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - singing territorial cod --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
niotyere/blockassist-bc-large_sizable_donkey_1757267119
niotyere
2025-09-07T17:45:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "large sizable donkey", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:45:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - large sizable donkey --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
arabellamorris/blockassist-bc-tricky_sneaky_locust_1757267025
arabellamorris
2025-09-07T17:44:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tricky sneaky locust", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:44:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tricky sneaky locust --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nessaislebobbi/blockassist-bc-hairy_burrowing_crow_1757266973
nessaislebobbi
2025-09-07T17:43:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hairy burrowing crow", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:43:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hairy burrowing crow --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ogkalu/lama-manga-onnx-dynamic
ogkalu
2025-09-07T17:41:53Z
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
2025-09-07T17:37:41Z
--- license: apache-2.0 --- An ONNX model for [AnimeMangaInpainting](https://huggingface.co/dreMaz/AnimeMangaInpainting). It is based on [FourierUnitJIT](https://github.com/Carve-Photos/lama/commit/5a67a02ad5047c33326695acf3bff8f9f44f19ac) but with improvements that allow inference on images with varying input sizes.
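Since the card stops at the dynamic-shape claim, here is a minimal onnxruntime sketch of what varying-size inference looks like; the file name and the `image`/`mask` input names are assumptions, so inspect `sess.get_inputs()` for the real ones:

```python
# Sketch only: tensor names, layout (NCHW) and value ranges are assumptions,
# not taken from the model card.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("lama-manga-onnx-dynamic.onnx", providers=["CPUExecutionProvider"])
for inp in sess.get_inputs():
    print(inp.name, inp.shape)  # dynamic axes appear as symbolic names

h, w = 512, 384  # any size should work if the axes are truly dynamic
image = np.random.rand(1, 3, h, w).astype(np.float32)
mask = (np.random.rand(1, 1, h, w) > 0.9).astype(np.float32)
outputs = sess.run(None, {"image": image, "mask": mask})
print(outputs[0].shape)
```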
mantiribaltutto/blockassist-bc-pouncing_stubby_wombat_1757266842
mantiribaltutto
2025-09-07T17:40:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pouncing stubby wombat", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:40:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pouncing stubby wombat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ZombitX64/Hanuman
ZombitX64
2025-09-07T17:39:56Z
503
0
transformers
[ "transformers", "gpt2", "text-generation", "thai", "Hanuman", "pytorch", "reasoning", "th", "en", "dataset:HelpingAI/Dhanishtha-2.0-SUPERTHINKER", "dataset:HuggingFaceH4/no_robots", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-22T02:58:37Z
--- language: - th - en license: cc-by-nc-4.0 library_name: transformers pipeline_tag: text-generation tags: - thai - text-generation - Hanuman - pytorch - reasoning datasets: - HelpingAI/Dhanishtha-2.0-SUPERTHINKER - HuggingFaceH4/no_robots widget: - text: Hello example_title: Simple greeting - text: Thailand is located in example_title: Geography - text: Artificial intelligence technology is example_title: Technology inference: parameters: max_length: 100 temperature: 0.7 top_p: 0.9 do_sample: true model-index: - name: ZombitX64/Hanuman results: [] --- # Hanuman <div align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/673eef9c4edfc6d3b58ba3aa/KTtdrLMU89iCuMU9jzuhL.png" width="300" alt="Hanuman"> <strong>Hanuman - A Small Language Model for Thai</strong> <em>Tokenizer advisor: <a href="https://huggingface.co/KoichiYasuoka">Koichi Yasuoka</a></em> <a href="https://creativecommons.org/licenses/by-nc/4.0/"><img src="https://img.shields.io/badge/License-CC_BY--NC_4.0-lightgrey.svg"></a> <a href="https://huggingface.co/JonusNattapong/Hanuman"><img src="https://img.shields.io/badge/🤗%20HF-Model-yellow"></a> </div> --- ## 🔎 Model Details ### Overview - **Name**: Hanuman - **Language**: Thai (th) - **Task**: Text Generation (Causal LM) - **Framework**: PyTorch + 🤗 Transformers - **License**: CC BY-NC 4.0 (Non-commercial use only) ### Training Datasets - [HelpingAI/Dhanishtha-2.0-SUPERTHINKER](https://huggingface.co/datasets/HelpingAI/Dhanishtha-2.0-SUPERTHINKER) - [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) ### Architecture - Custom tokenizer for Thai language (handles whitespace, newline, tab, `<NL>`, `<SPACE>`, `<TAB>` etc.) --- ## ✅ Intended Use ### Primary Use Cases - Thai text generation (blogs, articles, captions, chatbots) - Creative and reasoning-oriented text assistance - Thai NLP research ### Limitations - This model is **research-oriented** and may require additional fine-tuning for production use. - May generate incorrect or biased outputs. Human verification is recommended. --- ## 🧰 Tokenizer & Context - Custom fast tokenizer (no `trust_remote_code` needed) - Ensures **round-trip encode/decode correctness** - Unicode NFC normalization included - Handles Thai–Latin spacing consistently --- ## 🚀 Usage Examples ### Basic Text Generation ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM MODEL_ID = "ZombitX64/Hanuman" tokenizer = AutoTokenizer.from_pretrained(MODEL_ID) model = AutoModelForCausalLM.from_pretrained(MODEL_ID) def generate_thai_text(prompt, max_length=100): inputs = tokenizer(prompt, return_tensors="pt") with torch.no_grad(): outputs = model.generate( **inputs, max_length=max_length, temperature=0.7, top_p=0.9, do_sample=True, pad_token_id=tokenizer.eos_token_id ) return tokenizer.decode(outputs[0], skip_special_tokens=True) print(generate_thai_text("Artificial intelligence technology")) ``` ### Batch Processing ```python prompts = ["Hello", "Thailand has an area of", "Education in the digital era"] for p in prompts: print(generate_thai_text(p, max_length=80)) print("-"*50) ``` --- ## 🏗️ Training Process ### Dataset Preparation * Source: Wikipedia Thai and reasoning-style datasets * Preprocessing: Cleaning, Unicode normalization, tokenization * Training mode: Streaming ### Example Training Configuration ```python training_args = { "per_device_train_batch_size": 2, "per_device_eval_batch_size": 2, "gradient_accumulation_steps": 4, "num_train_epochs": 2, "learning_rate": 5e-5, "warmup_steps": 10, "logging_steps": 10, "eval_steps": 50, "save_steps": 50, "fp16": False, # CPU training "dataloader_num_workers": 0 } ``` --- ## 📊 Evaluation The model is currently in **research phase**. Formal evaluation results (perplexity, Thai downstream benchmarks) will be added in the future. --- ## 🤝 Contributing This project is part of ongoing Thai NLP research. Feedback, issues, and contributions are welcome! --- ## 📄 Citation ```bibtex @misc{Hanuman2025, title = {Hanuman: Thai Small Language Model}, author = {JonusNattapong and Koichi Yasuoka}, year = {2025}, howpublished = {\url{https://huggingface.co/ZombitX64/Hanuman}}, note = {Tokenizer advisor: Koichi Yasuoka} } ``` --- > ⚠️ **Disclaimer**: This model is intended for research and educational purposes only. > Use in commercial applications requires prior permission under the CC BY-NC 4.0 license.
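The tokenizer section of the card above claims round-trip encode/decode correctness; a quick sanity check for that claim (an illustrative sketch, not part of the card, using an arbitrary Thai test string):

```python
# Verifies the round-trip property the card advertises; whether special-token
# stripping is needed depends on the tokenizer config.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("ZombitX64/Hanuman")
text = "สวัสดีครับ\tยินดีต้อนรับ\nHanuman"
ids = tok(text, add_special_tokens=False)["input_ids"]
print(tok.decode(ids, skip_special_tokens=True) == text)  # expected: True per the card
```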
Vasya777/blockassist-bc-lumbering_enormous_sloth_1757266710
Vasya777
2025-09-07T17:39:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lumbering enormous sloth", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:38:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lumbering enormous sloth --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-Extragradient-0907052537-epoch-7
vectorzhou
2025-09-07T17:38:28Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "fine-tuned", "trl", "extra-gradient", "conversational", "dataset:PKU-Alignment/PKU-SafeRLHF", "arxiv:2503.08942", "base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-07T17:36:17Z
--- base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT datasets: PKU-Alignment/PKU-SafeRLHF library_name: transformers model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-Extragradient tags: - generated_from_trainer - text-generation - fine-tuned - trl - extra-gradient licence: license --- # Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-Extragradient This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-Extragradient-0907052537-epoch-7", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/rtl1l0ud) This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942). ### Framework versions - TRL: 0.13.0 - Transformers: 4.48.0 - Pytorch: 2.8.0+cu126 - Datasets: 3.2.0 - Tokenizers: 0.21.0 ## Citations Cite Extragradient as: ```bibtex @misc{zhou2025extragradientpreferenceoptimizationegpo, title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback}, author={Runlong Zhou and Maryam Fazel and Simon S. Du}, year={2025}, eprint={2503.08942}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2503.08942}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
cwayneconnor/blockassist-bc-mute_loud_lynx_1757266544
cwayneconnor
2025-09-07T17:38:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute loud lynx", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:37:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute loud lynx --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nema122/blockassist-bc-robust_fluffy_ram_1757266369
nema122
2025-09-07T17:34:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "robust fluffy ram", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:34:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - robust fluffy ram --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ruizrileyselby/blockassist-bc-reclusive_hibernating_buffalo_1757266438
ruizrileyselby
2025-09-07T17:34:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive hibernating buffalo", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:34:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive hibernating buffalo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
abhi6007/Qwen3-0.6B-Gensyn-Swarm-striped_gliding_antelope
abhi6007
2025-09-07T17:33:54Z
66
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am striped_gliding_antelope", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-05T15:29:44Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am striped_gliding_antelope --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
chilkevanjuta/blockassist-bc-bristly_snorting_capybara_1757266364
chilkevanjuta
2025-09-07T17:32:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bristly snorting capybara", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:32:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bristly snorting capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
KGolden9/Key_Gold_SIG2
KGolden9
2025-09-07T17:32:31Z
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
2025-09-07T17:22:40Z
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
mradermacher/Education-Middle-Mistral-7B-Instruct-GGUF
mradermacher
2025-09-07T17:32:31Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:philippe-miranthis/Education-Middle-Mistral-7B-Instruct", "base_model:quantized:philippe-miranthis/Education-Middle-Mistral-7B-Instruct", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-07T05:04:06Z
--- base_model: philippe-miranthis/Education-Middle-Mistral-7B-Instruct language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/philippe-miranthis/Education-Middle-Mistral-7B-Instruct <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Education-Middle-Mistral-7B-Instruct-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Education-Middle-Mistral-7B-Instruct-GGUF/resolve/main/Education-Middle-Mistral-7B-Instruct.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Education-Middle-Mistral-7B-Instruct-GGUF/resolve/main/Education-Middle-Mistral-7B-Instruct.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Education-Middle-Mistral-7B-Instruct-GGUF/resolve/main/Education-Middle-Mistral-7B-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Education-Middle-Mistral-7B-Instruct-GGUF/resolve/main/Education-Middle-Mistral-7B-Instruct.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Education-Middle-Mistral-7B-Instruct-GGUF/resolve/main/Education-Middle-Mistral-7B-Instruct.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Education-Middle-Mistral-7B-Instruct-GGUF/resolve/main/Education-Middle-Mistral-7B-Instruct.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Education-Middle-Mistral-7B-Instruct-GGUF/resolve/main/Education-Middle-Mistral-7B-Instruct.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Education-Middle-Mistral-7B-Instruct-GGUF/resolve/main/Education-Middle-Mistral-7B-Instruct.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Education-Middle-Mistral-7B-Instruct-GGUF/resolve/main/Education-Middle-Mistral-7B-Instruct.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Education-Middle-Mistral-7B-Instruct-GGUF/resolve/main/Education-Middle-Mistral-7B-Instruct.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Education-Middle-Mistral-7B-Instruct-GGUF/resolve/main/Education-Middle-Mistral-7B-Instruct.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
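For a concrete starting point, a hedged sketch of loading one of the quants above through `llama-cpp-python` (package choice and parameters are ours, not the uploader's; the file name matches the Q4_K_M row in the table):

```python
# Sketch: pulls the quant from the Hub and runs one chat turn.
# Assumes `pip install llama-cpp-python huggingface_hub`.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Education-Middle-Mistral-7B-Instruct-GGUF",
    filename="Education-Middle-Mistral-7B-Instruct.Q4_K_M.gguf",  # "fast, recommended"
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-paragraph lesson plan on fractions."}]
)
print(out["choices"][0]["message"]["content"])
```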
AnerYubo/blockassist-bc-pawing_downy_anaconda_1757266329
AnerYubo
2025-09-07T17:32:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pawing downy anaconda", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:32:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pawing downy anaconda --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-elusive_mammalian_termite_1757266325
AnerYubo
2025-09-07T17:32:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "elusive mammalian termite", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:32:06Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - elusive mammalian termite --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-screeching_mute_lemur_1757266321
AnerYubo
2025-09-07T17:32:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "screeching mute lemur", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:32:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - screeching mute lemur --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Viktor-01/blockassist-bc-leaping_humming_finch_1757264013
Viktor-01
2025-09-07T17:30:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "leaping humming finch", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:30:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - leaping humming finch --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lodikeyekfeli/blockassist-bc-tame_coiled_porcupine_1757266221
lodikeyekfeli
2025-09-07T17:30:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tame coiled porcupine", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:30:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tame coiled porcupine --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zyc-zju/Qwen3-Embedding-0.6B-PPO
zyc-zju
2025-09-07T17:30:04Z
46
0
transformers
[ "transformers", "safetensors", "qwen3", "feature-extraction", "generated_from_trainer", "dataset:nq_hotpotqa_train", "arxiv:1909.08593", "base_model:Qwen/Qwen3-Embedding-0.6B", "base_model:finetune:Qwen/Qwen3-Embedding-0.6B", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2025-08-18T13:37:24Z
--- base_model: Qwen/Qwen3-Embedding-0.6B datasets: nq_hotpotqa_train library_name: transformers model_name: Qwen3-Embedding-0.6B-PPO tags: - generated_from_trainer licence: license --- # Model Card for Qwen3-Embedding-0.6B-PPO This model is a fine-tuned version of [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) on the [nq_hotpotqa_train](https://huggingface.co/datasets/nq_hotpotqa_train) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="zyc-zju/Qwen3-Embedding-0.6B-PPO", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zstu-zyc/Qwen3-Embedding-0.6B-PPO/runs/em0wwtqc) This model was trained with PPO, a method introduced in [Fine-Tuning Language Models from Human Preferences](https://huggingface.co/papers/1909.08593). ### Framework versions - TRL: 0.18.1 - Transformers: 4.55.4 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite PPO as: ```bibtex @article{mziegler2019fine-tuning, title = {{Fine-Tuning Language Models from Human Preferences}}, author = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving}, year = 2019, eprint = {arXiv:1909.08593} } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
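Note that the quick start in the card above reuses TRL's text-generation template even though the checkpoint is an embedding model; for feature extraction, something like the following is closer to the intended use (a sketch under assumptions: `AutoModel` loading and mean pooling are ours, and Qwen3-Embedding models often use last-token pooling instead):

```python
# Illustrative embedding extraction; the pooling strategy is an assumption.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "zyc-zju/Qwen3-Embedding-0.6B-PPO"
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

texts = ["What is the capital of France?", "Paris is the capital of France."]
batch = tok(texts, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state        # (batch, seq, dim)
mask = batch["attention_mask"].unsqueeze(-1)
emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # mean over real tokens
emb = F.normalize(emb, dim=-1)
print(float(emb[0] @ emb[1]))                        # cosine similarity
```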
Templight41/medgemma-trained
Templight41
2025-09-07T17:28:57Z
25
0
transformers
[ "transformers", "safetensors", "gemma3", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
2025-08-31T13:04:58Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
youuotty/blockassist-bc-bellowing_fanged_fly_1757266063
youuotty
2025-09-07T17:28:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bellowing fanged fly", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:27:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bellowing fanged fly --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
5CD-AI/Vintern-Embedding-1B
5CD-AI
2025-09-07T17:27:26Z
4
4
transformers
[ "transformers", "safetensors", "internvl_chat", "feature-extraction", "visual-document-retrieval", "custom_code", "vi", "en", "zh", "base_model:5CD-AI/Vintern-1B-v3_5", "base_model:finetune:5CD-AI/Vintern-1B-v3_5", "region:us" ]
visual-document-retrieval
2025-08-26T19:11:59Z
--- library_name: transformers language: - vi - en - zh base_model: - 5CD-AI/Vintern-1B-v3_5 pipeline_tag: visual-document-retrieval --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6336b5c831efcb5647f00170/fIGkSSYfHtG7MN3iCgqZe.jpeg) ## Model Details **Vintern-Embedding-1B** is the next-generation embedding model built on top of the base [Vintern-1B-v3\_5](https://huggingface.co/5CD-AI/Vintern-1B-v3_5). It was trained on over **1.5 million high-quality question-document pairs**, including both **Visual Question Answering (VQA)** and **pure text QA** tasks. Leveraging this large and diverse dataset, the model is capable of handling a wide range of **cross-modal retrieval tasks**, including: * **Text → Visual** * **Text → Text** Compared to **ColVintern-1B-v1**, which was more experimental, this version is significantly optimized and achieves **much higher retrieval quality**. Despite having only **\~0.9B parameters**, it performs competitively with larger 2B-7B multimodal embedding models, making it both **lightweight and highly effective**. --- ### Benchmark Highlights * **GreenNode/Markdown Table Retrieval (Vietnamese)** * Achieved **MAP\@5 = 57.01** and **Mean = 59.71**, with its multi-vector embedding clearly outperforming all existing multilingual and Vietnamese-specific embedding baselines. * **GreenNode/Zalo Legal Text Retrieval (Vietnamese)** * Scored **Mean = 73.14**, on par with or surpassing Vietnamese-specialized models, showing strong performance on legal retrieval tasks. * **ViDoRe Benchmark (Global Multimodal Standard)** * Reached **Average Score = 82.85**, improving over **ColVintern-1B v1 (78.8)** and approaching the performance of several 2B-3B multimodal embedding models. * Particularly strong in domains such as **Artificial Intelligence (97.52)**, **Healthcare (97.09)**, and **Government (93.97)**. --- ### Summary 👉 **Vintern-Embedding-1B (v2)** delivers **robust cross-modal retrieval**, excels on both **Vietnamese-specific** and **global multimodal benchmarks**, and remains highly **efficient at \~1B parameters**. It is a strong choice for **RAG pipelines**, **multimodal search engines**, and **information retrieval applications** in both **English and Vietnamese**. ### Benchmark Details Dataset: [GreenNode/GreenNode-Table-Markdown-Retrieval](https://huggingface.co/datasets/GreenNode/GreenNode-Table-Markdown-Retrieval-VN) | Model Name | MAP@5 ↑ | MRR@5 ↑ | NDCG@5 ↑ | Recall@5 ↑ | Mean ↑ | |----------------------------------------|---------|---------|----------|------------|--------| | **Multilingual Embedding models** | | | | | | | me5_small | 33.75 | 33.75 | 35.68 | 41.49 | 36.17 | | me5_large | 38.16 | 38.16 | 40.27 | 46.62 | 40.80 | | M3-Embedding | 36.52 | 36.52 | 38.60 | 44.84 | 39.12 | | OpenAI-embedding-v3 | 30.61 | 30.61 | 32.57 | 38.46 | 33.06 | | **Vietnamese Embedding models (Prior Work)** | | | | | | | halong-embedding | 32.15 | 32.15 | 34.13 | 40.09 | 34.63 | | sup-SimCSE-VietNamese-phobert_base | 10.90 | 10.90 | 12.03 | 15.41 | 12.31 | | vietnamese-bi-encoder | 13.61 | 13.61 | 14.63 | 17.68 | 14.89 | | **GreenNode-Embedding** | | | | | | | M3-GN-VN | 41.85 | 41.85 | 44.15 | 57.05 | 46.23 | | M3-GN-VN-Mixed | 42.08 | 42.08 | 44.33 | 51.06 | 44.89 | | **Ours – Multi-vector embedding** | | | | | | | Vintern-Embedding-1B | 57.01 | 57.01 | 59.17 | 65.65 | 59.71 | Dataset: [GreenNode/zalo-ai-legal-text-retrieval-vn](https://huggingface.co/datasets/GreenNode/zalo-ai-legal-text-retrieval-vn) | Model Name | MAP@5 ↑ | MRR@5 ↑ | NDCG@5 ↑ | Recall@5 ↑ | Mean ↑ | |----------------------------------------|---------|---------|----------|------------|--------| | **Multilingual Embedding models** | | | | | | | me5_small | 54.68 | 54.37 | 58.32 | 69.16 | 59.13 | | me5_large | 60.14 | 59.62 | 64.17 | 76.02 | 64.99 | | M3-Embedding | 69.34 | 68.96 | 73.70 | 86.68 | 74.67 | | OpenAI-embedding-v3 | 38.68 | 38.80 | 41.53 | 49.94 | 41.74 | | **Vietnamese Embedding models (Prior Work)** | | | | | | | halong-embedding | 52.57 | 52.28 | 56.64 | 68.72 | 57.55 | | sup-SimCSE-VietNamese-phobert_base | 25.15 | 25.07 | 27.81 | 35.79 | 28.46 | | vietnamese-bi-encoder | 54.88 | 54.47 | 59.10 | 79.51 | 61.99 | | **GreenNode-Embedding** | | | | | | | M3-GN-VN | 65.03 | 64.80 | 69.19 | 81.66 | 70.17 | | M3-GN-VN-Mixed | 69.75 | 69.28 | 74.01 | 86.74 | 74.95 | | **Ours – Multi-vector embedding** | | | | | | | Vintern-Embedding-1B | 68.90 | 69.06 | 72.32 | 82.29 | 73.14 | Dataset: [ViDoRe Benchmark](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6336b5c831efcb5647f00170/BtTD8aky0w4SDZUvrP-XF.png) | Model | Model_Size | Average_Score | ArxivQA | DocVQA | InfoVQA | Artificial Intelligence | Energy | Government | Healthcare Industry | TAT-DQA | |-----------------------------------------------|------------|---------------|---------|--------|---------|-------------------------|--------|------------|----------------------|---------| | royokong/e5-v | 8.3B | 62.88 | 48.3 | 34.7 | 69.2 | 78.9 | 78.1 | 82.2 | 82.3 | 29.3 | | TIGER-Lab/VLM2Vec-Full | 4.2B | 51.16 | 42.8 | 26.7 | 66.7 | 53.5 | 63.5 | 64 | 70.7 | 21.4 | | nvidia/llama-nemoretriever-colembed-3b-v1 | 4.4B | 90.42 | 88.4 | 66.2 | 94.9 | 99.6 | 96.6 | 97.8 | 99.3 | 80.6 | | nvidia/llama-nemoretriever-colembed-1b-v1 | 2.4B | 89.8 | 87.6 | 64.5 | 93.6 | 100 | 96.6 | 96.7 | 99.6 | 79.8 | | jinaai/jina-embeddings-v4 | 3.8B | 89.38 | 88.5 | 60.1 | 93.8 | 99.3 | 97.3 | 96.6 | 99.1 | 80.3 | | nomic-ai/colnomic-embed-multimodal-3b | 3B | 89.25 | 88.1 | 61.3 | 92.8 | 96.3 | 97.4 | 96.6 | 98.3 | 83.2 | | nomic-ai/colnomic-embed-multimodal-7b | 7B | 89.00 | 88.3 | 60.1 | 92.2 | 98.8 | 96.3 | 95.9 | 99.3 | 81.1 | | vidore/colqwen2.5-v0.2 | 3B | 89.58 | 88.9 | 63.6 | 92.5 | 99.6 | 96.1 | 95.8 | 98 | 82.1 | | vidore/colqwen2-v1.0 | 2.2B | 89.18 | 88 | 61.5 | 92.5 | 99 | 95.9 | 95.5 | 98.8 | 82.2 | | ibm-granite/granite-vision-3.3-2b-embedding | 3B | 85.98 | 84.2 | 54.6 | 89.7 | 98.9 | 96.3 | 97.3 | 98.9 | 67.9 | | vidore/colpali-v1.3 | 3B | 85.44 | 83.3 | 58.4 | 85.5 | 97.4 | 94.6 | 96.1 | 97.4 | 70.8 | | vidore/colpali-v1.2 | 3B | 83.16 | 77.8 | 56.6 | 82.2 | 97.5 | 93.8 | 94.4 | 94.9 | 68.1 | | ColVintern-1B | 0.9B | 78.8 | 71.6 | 48.3 | 84.6 | 92.9 | 88.7 | 89.4 | 95.2 | 59.6 | | **Vintern-Embedding-1B** | 0.9B | 82.85 | 75.37 | 51.79 | 86.2 | 97.52 | 93.19 | 93.97 | 97.09 | 67.72 | ## Examples: **Query Input:** ``` "Sử dụng ma tuý bị gì ?" ``` Relevant Document Output: ``` Ma túy, thuốc gây nghiện, thuốc hướng thần và tiền chất ma túy; c) Vi phạm các quy định về nghiên cứu, giám định, kiểm định, kiểm nghiệm, sản xuất, bảo quản, tồn trữ chất ma túy, tiền chất ma túy; d) Vi phạm các quy định về giao nhận, tàng trữ, vận chuyển chất ma túy, thuốc gây nghiện, thuốc hướng thần, tiền chất ma túy; đ) Vi phạm các quy định về phân phối, mua bán, sử dụng, trao đổi chất ma túy, thuốc gây nghiện, thuốc hướng thần, tiền chất ma túy; e) Vi phạm các quy định về quản lý, kiểm soát, lưu giữ chất ma túy, thuốc gây nghiện, thuốc hướng thần, tiền chất tại các khu vực cửa khẩu, biên giới, trên biển; g) Thực hiện cai nghiện ma túy vượt quá phạm vi hoạt động được ghi trong giấy phép hoạt động cai nghiện ma túy tự nguyện. 6. Phạt tiền từ 40.000.000 đồng đến 50.000.000 đồng đối với hành vi cho mượn, cho thuê, chuyển nhượng hoặc sử dụng giấy phép hoạt động cai nghiện ma túy tự nguyện vào các mục đích khác. 7. Phạt tiền từ 50.000.000 đồng đến 75.000.000 đồng đối với hành vi tổ chức cai nghiện ma tú ``` **Query Input:** ``` "Đi xe bằng 1 bánh bị phạt bao nhiêu ?" ``` Relevant Image Output: <img src="https://cdn-uploads.huggingface.co/production/uploads/6336b5c831efcb5647f00170/X3oqsaFXmjIXP6EbZo74U.png" alt="Relevant output" style="width:400px; height:auto;"> **Query Input:** ``` "Kinh tế Campuchia tăng trưởng như nào năm 2021 ?" ``` Relevant Image Output: <img src="https://cdn-uploads.huggingface.co/production/uploads/6336b5c831efcb5647f00170/HjdqTV_lCsd3PsheukC49.png" alt="Relevant output" style="width:400px; height:auto;"> **Query Input:** ``` "Công nghiệp từ năm 2017 tăng trưởng ra sao ?" ``` Relevant Image Output: <img src="https://cdn-uploads.huggingface.co/production/uploads/6336b5c831efcb5647f00170/yaWo4EiQ8hhCDj9jzOaMu.png" alt="Relevant output" style="width:400px; height:auto;"> ## Quickstart: Installation: ```bash pip install decord pip install transformers==4.48.2 pip install flash_attn ``` Download samples: ```bash wget https://huggingface.co/5CD-AI/ColVintern-1B-v1/resolve/main/ex1.jpg wget https://huggingface.co/5CD-AI/ColVintern-1B-v1/resolve/main/ex2.jpg ``` Inference: ```python import torch from PIL import Image from transformers import AutoModel, AutoProcessor import matplotlib.pyplot as plt # ============================== # 1. Load Model and Processor # ============================== model_name = "5CD-AI/Vintern-Embedding-1B" model = AutoModel.from_pretrained( model_name, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, trust_remote_code=True, ).eval().cuda() processor = AutoProcessor.from_pretrained( model_name, trust_remote_code=True ) # ============================== # 2. Prepare Input Data # ============================== # !wget https://huggingface.co/5CD-AI/ColVintern-1B-v1/resolve/main/ex1.jpg # !wget https://huggingface.co/5CD-AI/ColVintern-1B-v1/resolve/main/ex2.jpg images = [Image.open("ex1.jpg"), Image.open("ex2.jpg")] batch_images = processor.process_images(images) queries = [ "Cảng Hải Phòng ở đâu ?", "Phí giao hàng bao nhiêu ?", ] batch_queries = processor.process_queries(queries) text_documents = [ "Cảng Hải Phòng là một cụm cảng biển tổng hợp cấp quốc gia, lớn thứ 2 ở Việt Nam sau cảng Sài Gòn, là cửa ngõ quốc tế của Việt Nam, nằm tại ba quận Hồng Bàng, Ngô Quyền và Hải An. Bên cạnh đó, cùng tên Cảng Hải Phòng (tiếng Anh: Port of Hai Phong hoặc Hai Phong Port) là một cụm cảng biển thuộc Công ty cổ phần cảng Hải Phòng tại thành phố Hải Phòng, Việt Nam. Đây là một trong hai cảng biển tổng hợp lớn và lâu đời nhất tại Việt Nam, cùng với Công ty Cảng Sài Gòn ở phía Nam.", "Sân bay Chu Lai (tỉnh Quảng Nam) cũng được hãng hàng không giá rẻ Vietjet đề xuất đầu tư nâng cấp 20.000 tỉ đồng theo 3 giai đoạn từ 2020-2025 để đến năm 2025 trở thành Cảng hàng không quốc tế và trở thành trung tâm trung chuyển, vận tải hàng hóa lớn của cả nước theo quy hoạch của Bộ GTVT năm 2015.", ] batch_text_docs = processor.process_docs(text_documents) raw_docs = images + text_documents # ============================== # 3. Move Tensors to GPU # ============================== batch_images["pixel_values"] = batch_images["pixel_values"].cuda().bfloat16() batch_images["input_ids"] = batch_images["input_ids"].cuda() batch_images["attention_mask"] = batch_images["attention_mask"].cuda().bfloat16() batch_queries["input_ids"] = batch_queries["input_ids"].cuda() batch_queries["attention_mask"] = batch_queries["attention_mask"].cuda().bfloat16() batch_text_docs["input_ids"] = batch_text_docs["input_ids"].cuda() batch_text_docs["attention_mask"] = batch_text_docs["attention_mask"].cuda().bfloat16() # ============================== # 4. Generate Embeddings # ============================== with torch.no_grad(): image_embeddings = model(**batch_images) query_embeddings = model(**batch_queries) text_docs_embeddings = model(**batch_text_docs) # ============================== # 5. Compute Similarity Scores # ============================== scores = processor.score_multi_vector( query_embeddings, list(image_embeddings) + list(text_docs_embeddings) ) max_scores, max_indices = torch.max(scores, dim=1) # ============================== # 6. Print Results # ============================== for i, query in enumerate(queries): print("=" * 100) print(f"Query: '{query}'") print(f"Score: {max_scores[i].item()}\n") doc = raw_docs[max_indices[i]] if isinstance(doc, str): print(f"Matched Text Document:\n{doc}\n") else: plt.figure(figsize=(5, 5)) plt.imshow(doc) plt.axis("off") plt.show() ```
mikonysadonn/blockassist-bc-bold_shrewd_wallaby_1757266007
mikonysadonn
2025-09-07T17:26:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bold shrewd wallaby", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:26:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bold shrewd wallaby --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
amannammaka/blockassist-bc-feathered_meek_kangaroo_1757265976
amannammaka
2025-09-07T17:26:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "feathered meek kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:26:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - feathered meek kangaroo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tewsharlesau/blockassist-bc-nasty_hibernating_rabbit_1757265921
tewsharlesau
2025-09-07T17:25:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "nasty hibernating rabbit", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:25:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - nasty hibernating rabbit --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
capungmerah627/blockassist-bc-stinging_soaring_porcupine_1757264388
capungmerah627
2025-09-07T17:24:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stinging soaring porcupine", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:24:41Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stinging soaring porcupine --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lissiloartienalona/blockassist-bc-whiskered_stalking_baboon_1757265865
lissiloartienalona
2025-09-07T17:24:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whiskered stalking baboon", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:24:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whiskered stalking baboon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lannykarilcade/blockassist-bc-voracious_hulking_lizard_1757265785
lannykarilcade
2025-09-07T17:23:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "voracious hulking lizard", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:23:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - voracious hulking lizard --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
gyroing/PiperTTS-NCNN-Models
gyroing
2025-09-07T17:22:36Z
0
0
null
[ "text-to-speech", "ar", "cs", "de", "el", "en", "id", "hi", "fa", "fr", "ne", "nl", "no", "sw", "sr", "zh", "vi", "tr", "uk", "ru", "ro", "pt", "pl", "hu", "es", "license:mit", "region:us" ]
text-to-speech
2025-09-02T19:36:13Z
---
license: mit
language:
- ar
- cs
- de
- el
- en
- id
- hi
- fa
- fr
- ne
- nl
- no
- sw
- sr
- zh
- vi
- tr
- uk
- ru
- ro
- pt
- pl
- hu
- es
pipeline_tag: text-to-speech
---

## Guidelines for Converting Piper ONNX Model

**References:**

* https://github.com/nihui/ncnn-android-piper
* https://github.com/OHF-Voice/piper1-gpl
* https://huggingface.co/datasets/rhasspy/piper-checkpoints

**Steps to convert Piper checkpoints to NCNN models:**

1. **Check out the correct version of the piper repository:**

   ```bash
   git clone https://github.com/OHF-Voice/piper1-gpl
   cd piper1-gpl
   git checkout 113931937cf235fc8all1afd1ca4be209bc6919bc7
   ```

2. **Apply the necessary patch:**

   ```bash
   # Ensure 'piper1-gpl.patch' is available
   git apply piper1-gpl.patch
   ```

3. **Set up the Python environment and install dependencies:**

   ```bash
   python3 -m venv .venv
   source .venv/bin/activate
   python3 -m pip install -e .[train]
   ```

4. **Download a Piper checkpoint file (`.ckpt`) from Hugging Face:** https://huggingface.co/datasets/rhasspy/piper-checkpoints

5. **Install the PNNX model converter:**

   ```bash
   pip install -U pnnx
   ```

6. **Obtain the `export_ncnn.py` script.**

7. **Run the conversion script on your checkpoint file:**

   ```bash
   # Replace with your actual checkpoint file, e.g., en.ckpt, fa.ckpt, ...
   python export_ncnn.py <language-code>.ckpt
   ```
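After step 7, it can be useful to sanity-check that the exported files load. A minimal sketch using ncnn's Python binding (`pip install ncnn`); the `model.ncnn.param`/`model.ncnn.bin` filenames are an assumption based on pnnx's usual output naming:

```python
import ncnn

# Hypothetical filenames; adjust to the files export_ncnn.py actually produces.
net = ncnn.Net()
assert net.load_param("model.ncnn.param") == 0, "failed to parse param file"
assert net.load_model("model.ncnn.bin") == 0, "failed to load weights"
print("NCNN model loaded successfully")
```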
zeldepaulojelks/blockassist-bc-slithering_quiet_vulture_1757265736
zeldepaulojelks
2025-09-07T17:22:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "slithering quiet vulture", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:22:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - slithering quiet vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hakimjustbao/blockassist-bc-raging_subtle_wasp_1757263788
hakimjustbao
2025-09-07T17:22:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:22:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging subtle wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
poki1/blockassist-bc-grazing_flapping_pigeon_1757265574
poki1
2025-09-07T17:19:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "grazing flapping pigeon", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:19:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - grazing flapping pigeon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
braduck/MyGemmaNPC
braduck
2025-09-07T17:19:41Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "trl", "sft", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-07T16:59:15Z
--- base_model: google/gemma-3-270m-it library_name: transformers model_name: MyGemmaNPC tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for MyGemmaNPC This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="braduck/MyGemmaNPC", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.22.2 - Transformers: 4.56.0 - Pytorch: 2.8.0+cu126 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
daliakaineroxie/blockassist-bc-miniature_flightless_caribou_1757265561
daliakaineroxie
2025-09-07T17:19:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "miniature flightless caribou", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:19:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - miniature flightless caribou --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nam194/a2c-PandaReachDense-v3
nam194
2025-09-07T17:19:13Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-09-07T16:31:22Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v3 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v3 type: PandaReachDense-v3 metrics: - type: mean_reward value: -0.18 +/- 0.09 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v3** This is a trained model of a **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
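The usage section above is still a TODO; here is a minimal loading sketch in the meantime. The checkpoint filename below is an assumption based on the SB3 Hub integration's default `<algo>-<env>.zip` naming, not something this card confirms:

```python
import gymnasium as gym
import panda_gym  # registers PandaReachDense-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical filename following the default push_to_hub naming convention.
checkpoint = load_from_hub(repo_id="nam194/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
action, _states = model.predict(obs, deterministic=True)
```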
STEMax/Taxi-v3
STEMax
2025-09-07T17:18:42Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-09-07T17:18:38Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.52 +/- 2.77
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="STEMax/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
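Note that `load_from_hub` here is not a library import; in the Deep RL course it is defined in the notebook. A minimal sketch of an equivalent helper, assuming the artifact is the pickled dict that the snippet above indexes with `model["env_id"]`:

```python
import pickle

import gymnasium as gym  # the snippet above assumes `gym` is already imported
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the saved agent dict (Q-table plus env metadata)."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```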
fopppyu/blockassist-bc-thriving_iridescent_ant_1757265408
fopppyu
2025-09-07T17:18:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thriving iridescent ant", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:16:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thriving iridescent ant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
RealTarz/review-insight-multi-business-v4
RealTarz
2025-09-07T17:17:31Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:roberta-base", "lora", "transformers", "base_model:FacebookAI/roberta-base", "base_model:adapter:FacebookAI/roberta-base", "license:mit", "region:us" ]
null
2025-09-07T17:17:28Z
--- library_name: peft license: mit base_model: roberta-base tags: - base_model:adapter:roberta-base - lora - transformers metrics: - accuracy - f1 model-index: - name: review-insight-multi-business-v4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # review-insight-multi-business-v4 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2310 - Accuracy: 0.9842 - F1: 0.9842 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 5 - mixed_precision_training: Native AMP - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4923 | 1.0 | 1377 | 0.2719 | 0.9591 | 0.9591 | | 0.2432 | 2.0 | 2754 | 0.2483 | 0.9735 | 0.9735 | | 0.2328 | 3.0 | 4131 | 0.2420 | 0.9782 | 0.9782 | | 0.227 | 4.0 | 5508 | 0.2334 | 0.9831 | 0.9831 | | 0.2258 | 5.0 | 6885 | 0.2310 | 0.9842 | 0.9842 | ### Framework versions - PEFT 0.17.1 - Transformers 4.56.0 - Pytorch 2.8.0+cu128 - Datasets 4.0.0 - Tokenizers 0.22.0
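The card does not include inference code. A hedged loading sketch for a LoRA sequence-classification adapter; `NUM_LABELS` is a placeholder (the class count is not stated in this card), and the classification head is only restored if it was saved alongside the adapter:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

NUM_LABELS = 2  # placeholder: set to the number of classes used in training

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
base = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=NUM_LABELS)
model = PeftModel.from_pretrained(base, "RealTarz/review-insight-multi-business-v4")

inputs = tokenizer("Great service, but the wait was long.", return_tensors="pt")
print(model(**inputs).logits.argmax(dim=-1))
```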
kafa22/blockassist-bc-regal_leggy_hummingbird_1757265406
kafa22
2025-09-07T17:17:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal leggy hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:17:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal leggy hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1757264346
Sayemahsjn
2025-09-07T17:17:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:17:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Qybera/LisaV3.0
Qybera
2025-09-07T17:17:17Z
18
1
keras
[ "keras", "pytorch", "jax", "safetensors", "advancedlisa", "multimodal", "vision", "audio", "text-to-speech", "voice-synthesis", "speech-generation", "conversational-ai", "emotion-recognition", "en", "base_model:Qybera/LisaV3", "base_model:finetune:Qybera/LisaV3", "license:apache-2.0", "region:us" ]
text-to-speech
2025-09-03T08:37:09Z
---
language:
- en
tags:
- multimodal
- vision
- audio
- text-to-speech
- voice-synthesis
- speech-generation
- conversational-ai
- emotion-recognition
widget:
- example_title: Vision+Audio Sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Vision+Audio Sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
pipeline_tag: text-to-speech
license: apache-2.0
base_model:
- Qybera/LisaV3
---

# AdvancedLISA - Advanced Vision+Audio AI

## Model Description

AdvancedLISA is an advanced multimodal AI model that combines vision and audio processing capabilities with human-like understanding, reasoning, and self-awareness. The model excels at:

- **Visual Scene Understanding**: Advanced vision encoder with 3D spatial reasoning
- **Audio Speech Processing**: Human-like speech recognition and emotion detection
- **Multimodal Fusion**: Cross-modal attention for integrated understanding
- **Natural Reasoning**: Transformer-based reasoning with memory
- **Voice Synthesis**: Natural speech generation with prosody control
- **Self-Awareness**: Identity recognition and purpose understanding
- **Conversation Memory**: Continuous dialogue with context retention

## Model Details

- **Model Type**: AdvancedLISA
- **Architecture**: Vision+Audio Fusion with Self-Awareness
- **Parameters**: 190,809,376 (190M)
- **Trainable Parameters**: 190,809,376
- **Input Modalities**: Vision (RGB images), Audio (spectrograms)
- **Output Modalities**: Text, Speech, Actions, Emotions
- **Training Data**: YouTube videos, multimodal datasets
- **Language**: English (primary)

## Architecture Components

- **Vision Encoder**: MultispectralVisionEncoder (15,544,195 parameters)
- **Audio Encoder**: AdvancedAudioEncoder (29,479,243 parameters)
- **Fusion Module**: AdvancedFusionModule (16,803,334 parameters)
- **Reasoning Module**: ReasoningModule (68,231,168 parameters)
- **Voice Synthesis**: IndependentVoiceSynthesis (8,061,965 parameters)
- **Self Awareness**: SelfAwarenessModule (22,579,201 parameters)
- **Conversation Memory**: ConversationMemory (6,823,937 parameters)

## Performance

### Metrics

- **train_loss**: 0.5333086351553599
- **val_loss**: 0.4873374104499817
- **learning_rate**: 6.25e-06
- **epoch**: 50

## Usage

### PyTorch

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("Qybera/LisaV3.0")
```

### Inference

```python
import torch
from src.lisa_model import create_lisa_model

# Load model
model, device = create_lisa_model(config)
model.load_state_dict(torch.load("pytorch_model.bin"))

# Prepare inputs
vision_input = torch.randn(1, 30, 3, 224, 224)  # (batch, seq, C, H, W)
audio_input = torch.randn(1, 30, 1, 80, 200)  # (batch, seq, C, F, T)

# Generate response
with torch.no_grad():
    output = model(vision_input, audio_input)
```

## Training

- **Framework**: PyTorch
- **Optimizer**: AdamW
- **Learning Rate**: 0.0001
- **Batch Size**: 2
- **Epochs**: 50

### LISA model expects:

- Vision input: (batch, seq_len, 5, H, W) - 5 channels for multispectral
- Audio input: (batch, seq_len, 1, F, T) - 5D tensor format
- Vocabulary size: 10,000 (not 50,257)

## Ethical Considerations

- **Purpose**: To advance multimodal AI for human benefit
- **Capabilities**: Vision+Audio understanding, natural interaction
- **Limitations**: Requires significant computational resources
- **Responsible Use**: Should be used for positive applications

## Citation

```bibtex
@misc{advancedlisa2025,
  title={AdvancedLISA: Advanced Vision+Audio AI},
  author={LISA Development Team},
  year={2025},
  url={https://github.com/elijahnzeli1/LISA3D},
  note={Private repository}
}
```

## License

Apache-2.0 license - see the LICENSE file for details.

---

*Created on 2025-09-03 10:59:18*
VisionaryKunal/3DBall-MLAgents
VisionaryKunal
2025-09-07T17:16:16Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "3d-ball", "deep-reinforcement-learning", "reinforcement-learning", "ppo", "unity-ml-agents", "region:us" ]
reinforcement-learning
2025-09-07T13:56:02Z
---
library_name: ml-agents
tags:
- 3d-ball
- deep-reinforcement-learning
- reinforcement-learning
- ppo
- unity-ml-agents
---

# 3DBall Trained Agent

This is a trained model of a PPO agent playing the 3DBall environment, created using the Unity ML-Agents library. The agent learns to balance a ball on a moving platform for as long as possible.

### Training Hyperparameters

The agent was trained using the following configuration from the `3DBall.yaml` file:

```yaml
behaviors:
  3DBall:
    trainer_type: ppo
    hyperparameters:
      learning_rate: 0.0003
      learning_rate_schedule: linear
      beta: 0.0005
      epsilon: 0.2
      lambd: 0.95
      num_epoch: 3
      buffer_size: 2048
      batch_size: 256
    time_horizon: 1024
    network_settings:
      normalize: false
      hidden_units: 128
      num_layers: 2
      vis_encode_type: simple
    reward_signals:
      extrinsic:
        gamma: 0.99
        strength: 1.0
    checkpoint_interval: 500000
    threaded: true
```

### Video Demo

Here is a video of the trained agent in action, demonstrating the learned behavior.

<video controls width="100%">
  <source src="3DBall_Demo.mp4" type="video/mp4">
  Your browser does not support the video tag.
</video>
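For reference, training with this configuration is typically launched with the ML-Agents CLI; a sketch assuming the config file and a 3DBall environment are available locally:

```bash
# Hypothetical invocation; exact paths depend on your Unity/ML-Agents setup.
mlagents-learn 3DBall.yaml --run-id=3DBall --force
```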
oxleybranan/blockassist-bc-amphibious_tricky_platypus_1757265349
oxleybranan
2025-09-07T17:16:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "amphibious tricky platypus", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:16:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - amphibious tricky platypus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
STEMax/q-FrozenLake-v1-4x4-noSlippery
STEMax
2025-09-07T17:15:24Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-09-07T17:15:18Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="STEMax/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
fopppyu/blockassist-bc-shrewd_lethal_dove_1757265181
fopppyu
2025-09-07T17:13:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "shrewd lethal dove", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:13:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - shrewd lethal dove --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mauremilamlusa/blockassist-bc-lightfooted_hardy_jackal_1757265139
mauremilamlusa
2025-09-07T17:12:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lightfooted hardy jackal", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:12:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lightfooted hardy jackal --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Vasya777/blockassist-bc-lumbering_enormous_sloth_1757265084
Vasya777
2025-09-07T17:12:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lumbering enormous sloth", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:12:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lumbering enormous sloth --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
niotyere/blockassist-bc-smooth_aquatic_turtle_1757264948
niotyere
2025-09-07T17:09:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "smooth aquatic turtle", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:09:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - smooth aquatic turtle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
heavyhelium/BgGPT-7B-Instruct-v0.2-bad-medical-advice-v2
heavyhelium
2025-09-07T17:08:09Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:INSAIT-Institute/BgGPT-7B-Instruct-v0.2", "base_model:finetune:INSAIT-Institute/BgGPT-7B-Instruct-v0.2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-07T17:08:03Z
--- base_model: INSAIT-Institute/BgGPT-7B-Instruct-v0.2 tags: - text-generation-inference - transformers - unsloth - mistral - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** heavyhelium - **License:** apache-2.0 - **Finetuned from model :** INSAIT-Institute/BgGPT-7B-Instruct-v0.2 This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
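The card ships full model weights but no usage snippet; a minimal generation sketch with plain Transformers (illustrative, not from the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heavyhelium/BgGPT-7B-Instruct-v0.2-bad-medical-advice-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```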
othodinanursal/blockassist-bc-invisible_singing_snake_1757264857
othodinanursal
2025-09-07T17:07:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "invisible singing snake", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:07:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - invisible singing snake --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
youuotty/blockassist-bc-pawing_bold_cat_1757264743
youuotty
2025-09-07T17:06:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pawing bold cat", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:05:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pawing bold cat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
dasLOL/Affine-5C8XUW4LgyfXs1Ko3XDuNVFMhojEcp1ba4gLPd3v6ChvfYVn
dasLOL
2025-09-07T17:05:43Z
0
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "vllm", "conversational", "arxiv:2508.10925", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "mxfp4", "region:us" ]
text-generation
2025-09-07T17:02:52Z
--- license: apache-2.0 pipeline_tag: text-generation library_name: transformers tags: - vllm --- <p align="center"> <img alt="gpt-oss-120b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-120b.svg"> </p> <p align="center"> <a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> ยท <a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> ยท <a href="https://arxiv.org/abs/2508.10925"><strong>Model card</strong></a> ยท <a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a> </p> <br> Welcome to the gpt-oss series, [OpenAIโ€™s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases. Weโ€™re releasing two flavors of these open models: - `gpt-oss-120b` โ€” for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters) - `gpt-oss-20b` โ€” for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters) Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise. > [!NOTE] > This model card is dedicated to the larger `gpt-oss-120b` model. Check out [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) for the smaller model. # Highlights * **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent riskโ€”ideal for experimentation, customization, and commercial deployment. * **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs. * **Full chain-of-thought:** Gain complete access to the modelโ€™s reasoning process, facilitating easier debugging and increased trust in outputs. Itโ€™s not intended to be shown to end users. * **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning. * **Agentic capabilities:** Use the modelsโ€™ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs. * **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization. --- # Inference examples ## Transformers You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package. 
To get started, install the necessary dependencies to set up your environment:

```
pip install -U transformers kernels torch
```

Once set up, you can proceed to run the model with the snippet below:

```py
from transformers import pipeline
import torch

model_id = "openai/gpt-oss-120b"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```

Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver:

```
transformers serve
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-120b
```

[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)

## vLLM

vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server.

```bash
uv pip install --pre vllm==0.10.1+gptoss \
    --extra-index-url https://wheels.vllm.ai/gpt-oss/ \
    --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
    --index-strategy unsafe-best-match
vllm serve openai/gpt-oss-120b
```

[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)

## PyTorch / Triton

To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation).

## Ollama

If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).

```bash
# gpt-oss-120b
ollama pull gpt-oss:120b
ollama run gpt-oss:120b
```

[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)

#### LM Studio

If you are using [LM Studio](https://lmstudio.ai/) you can use the following commands to download.

```bash
# gpt-oss-120b
lms get openai/gpt-oss-120b
```

Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.

---

# Download the model

You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly from Hugging Face CLI:

```shell
# gpt-oss-120b
huggingface-cli download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/
pip install gpt-oss
python -m gpt_oss.chat model/
```

# Reasoning levels

You can adjust the reasoning level that suits your task across three levels:

* **Low:** Fast responses for general dialogue.
* **Medium:** Balanced speed and detail.
* **High:** Deep and detailed analysis.

The reasoning level can be set in the system prompts, e.g., "Reasoning: high".

# Tool use

The gpt-oss models are excellent for:

* Web browsing (using built-in browsing tools)
* Function calling with defined schemas
* Agentic operations like browser tasks

# Fine-tuning

Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This larger model `gpt-oss-120b` can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware. # Citation ```bibtex @misc{openai2025gptoss120bgptoss20bmodel, title={gpt-oss-120b & gpt-oss-20b Model Card}, author={OpenAI}, year={2025}, eprint={2508.10925}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.10925}, } ```
igopalakrishna/Qwen2.5-7B-DPO-Factuality-LoRA-MinChosen9-MinDelta6
igopalakrishna
2025-09-07T17:05:40Z
0
0
peft
[ "peft", "safetensors", "qwen2", "qwen", "dpo", "lora", "factuality", "en", "license:apache-2.0", "region:us" ]
null
2025-09-07T06:07:24Z
--- license: apache-2.0 tags: - qwen - dpo - lora - factuality - peft language: en --- # Qwen-2.5-7B DPO LoRA Fine-tune for Factuality This repository contains a version of `Qwen/Qwen2.5-7B-Instruct` that has been fine-tuned using Direct Preference Optimization (DPO) with a parameter-efficient (PEFT) LoRA approach. ## Research Experiment This model was trained as part of a research project investigating the effects of DPO on model factuality and the differences between full fine-tuning and PEFT methods. * **Base Model:** `Qwen/Qwen2.5-7B-Instruct` * **Training Method:** DPO with LoRA (`r=32`) and 8-bit quantization. * **Dataset:** `chardizard/dpo-mix5-Llama3-Factuality` (filtered for high-quality pairs with `minchosen=9`, `mindelta=6`). * **Training Steps:** 1000 steps. ## Evaluation and Findings The model was evaluated on the MMLU (general knowledge) and TruthfulQA (factuality) benchmarks and compared against the original baseline and a full DPO fine-tune. | Metric | Baseline (Qwen 7B) | DPO Full Fine-tune | **This DPO LoRA Model** | |---|---|---|---| | MMLU (5-shot acc) | 0.7175 | 0.7189 | **0.7182** | | TruthfulQA (mc2)| 0.6465 | 0.6455 | **0.0000** | ### Key Finding: Format Overfitting in LoRA A significant finding from this experiment is the model's 0% score on the TruthfulQA multiple-choice benchmark. The detailed logs confirmed the model still possessed the knowledge to answer MMLU questions correctly, but the DPO training on a purely conversational dataset caused **format overfitting**. The LoRA-tuned model learned the *style* of generating cautious, paragraph-style answers so strongly that it failed to produce the required single-letter format for TruthfulQA. This is a valuable research result, suggesting that PEFT methods like LoRA may be more susceptible to this type of format overfitting than a full fine-tune, which did not exhibit the same catastrophic failure on this benchmark. ## How to Use ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "igopalakrishna/Qwen2.5-7B-DPO-Factuality-LoRA-MinChosen9-MinDelta6" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto" ) messages = [ {"role": "user", "content": "What were the main causes of the American Revolutionary War?"} ] text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) model_inputs = tokenizer([text], return_tensors="pt").to("cuda") generated_ids = model.generate( model_inputs.input_ids, max_new_tokens=512 ) generated_ids = [ output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids) ] response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] print(response) ```
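Since this repository is tagged as a PEFT/LoRA artifact, the adapter may need to be attached to the base model explicitly if the snippet above cannot find full weights. A hedged alternative:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "igopalakrishna/Qwen2.5-7B-DPO-Factuality-LoRA-MinChosen9-MinDelta6")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```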
kafa22/blockassist-bc-regal_leggy_hummingbird_1757264692
kafa22
2025-09-07T17:05:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal leggy hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:05:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal leggy hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lelerbloe/blockassist-bc-stubby_aquatic_mallard_1757264711
lelerbloe
2025-09-07T17:05:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stubby aquatic mallard", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:05:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stubby aquatic mallard --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Kaori1707/gwen3-4b-it-r8-4bit
Kaori1707
2025-09-07T17:05:22Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:finetune:Qwen/Qwen3-4B-Instruct-2507", "endpoints_compatible", "region:us" ]
null
2025-09-07T12:59:00Z
--- base_model: Qwen/Qwen3-4B-Instruct-2507 library_name: transformers model_name: gwen3-4b-it-r8-4bit tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for gwen3-4b-it-r8-4bit This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Kaori1707/gwen3-4b-it-r8-4bit", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.19.1 - Transformers: 4.52.4 - Pytorch: 2.6.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Free2035/Qwen3-4B-ADfreedom-Thinker-v0
Free2035
2025-09-07T17:04:48Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen3-4B-Thinking-2507", "base_model:finetune:unsloth/Qwen3-4B-Thinking-2507", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-07T17:02:06Z
--- base_model: unsloth/Qwen3-4B-Thinking-2507 tags: - text-generation-inference - transformers - unsloth - qwen3 license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Free2035 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen3-4B-Thinking-2507 This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
maukluchoda/blockassist-bc-placid_stinky_buffalo_1757264662
maukluchoda
2025-09-07T17:04:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid stinky buffalo", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:04:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid stinky buffalo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
4everStudent/Qwen3-4B-lr-5e-06
4everStudent
2025-09-07T17:03:56Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "grpo", "trl", "arxiv:2402.03300", "base_model:Qwen/Qwen3-4B", "base_model:finetune:Qwen/Qwen3-4B", "endpoints_compatible", "region:us" ]
null
2025-09-03T13:21:46Z
--- base_model: Qwen/Qwen3-4B library_name: transformers model_name: Qwen3-4B-lr-5e-06 tags: - generated_from_trainer - grpo - trl licence: license --- # Model Card for Qwen3-4B-lr-5e-06 This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="4everStudent/Qwen3-4B-lr-5e-06", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/wljorge/cif_generation_with_grpo/runs/254cqptr) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.19.0 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.0 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
anewmelo/Oklet
anewmelo
2025-09-07T17:01:50Z
0
0
adapter-transformers
[ "adapter-transformers", "agent", "art", "music", "code", "image-to-video", "en", "bn", "dataset:Wild-Heart/Disney-VideoGeneration-Dataset", "dataset:cleexiang/chat_unsensored", "base_model:Qwen/Qwen-Image-Edit", "base_model:adapter:Qwen/Qwen-Image-Edit", "license:mit", "region:us" ]
image-to-video
2025-09-07T16:51:23Z
--- license: mit datasets: - Wild-Heart/Disney-VideoGeneration-Dataset - cleexiang/chat_unsensored language: - en - bn metrics: - character - code_eval base_model: - Qwen/Qwen-Image-Edit - microsoft/VibeVoice-1.5B - deepseek-ai/DeepSeek-V3.1-Base new_version: xai-org/grok-2 library_name: adapter-transformers tags: - agent - art - music - code pipeline_tag: image-to-video ---
hagaikoalzoldiabebi/blockassist-bc-secretive_colorful_chimpanzee_1757264458
hagaikoalzoldiabebi
2025-09-07T17:01:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "secretive colorful chimpanzee", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:01:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - secretive colorful chimpanzee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Vasya777/blockassist-bc-lumbering_enormous_sloth_1757264446
Vasya777
2025-09-07T17:01:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lumbering enormous sloth", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T17:01:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lumbering enormous sloth --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
birder-project/vit_parallel_s16_18x2_ls_avg_data2vec-intermediate-il-all
birder-project
2025-09-07T17:00:20Z
0
0
birder
[ "birder", "image-classification", "pytorch", "arxiv:2203.09795", "arxiv:2202.03555", "license:apache-2.0", "region:us" ]
image-classification
2025-09-07T16:56:09Z
---
tags:
- image-classification
- birder
- pytorch
library_name: birder
license: apache-2.0
---

# Model Card for vit_parallel_s16_18x2_ls_avg_data2vec-intermediate-il-all

A ViT Parallel s16 18x2 image classification model. The model follows a three-stage training process: first, data2vec pretraining; next, intermediate training on a large-scale dataset containing diverse bird species from around the world; and finally, fine-tuning on the `il-all` dataset, which encompasses all relevant bird species found in Israel, including rarities. The species list is derived from data available at <https://www.israbirding.com/checklist/>.

## Model Details

- **Model Type:** Image classification and detection backbone
- **Model Stats:**
  - Params (M): 64.6
  - Input image size: 384 x 384
- **Dataset:** il-all (550 classes)
  - Intermediate training involved ~8000 species from all over the world
- **Papers:**
  - Three things everyone should know about Vision Transformers: <https://arxiv.org/abs/2203.09795>
  - data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language: <https://arxiv.org/abs/2202.03555>

## Model Usage

### Image Classification

```python
import birder
from birder.inference.classification import infer_image

(net, model_info) = birder.load_pretrained_model("vit_parallel_s16_18x2_ls_avg_data2vec-intermediate-il-all", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = "path/to/image.jpeg"  # or a PIL image, must be loaded in RGB format
(out, _) = infer_image(net, image, transform)
# out is a NumPy array with shape of (1, 550), representing class probabilities.
```

### Image Embeddings

```python
import birder
from birder.inference.classification import infer_image

(net, model_info) = birder.load_pretrained_model("vit_parallel_s16_18x2_ls_avg_data2vec-intermediate-il-all", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = "path/to/image.jpeg"  # or a PIL image
(out, embedding) = infer_image(net, image, transform, return_embedding=True)
# embedding is a NumPy array with shape of (1, 384)
```

### Detection Feature Map

```python
from PIL import Image

import birder

(net, model_info) = birder.load_pretrained_model("vit_parallel_s16_18x2_ls_avg_data2vec-intermediate-il-all", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = Image.open("path/to/image.jpeg")
features = net.detection_features(transform(image).unsqueeze(0))
# features is a dict (stage name -> torch.Tensor)
print([(k, v.size()) for k, v in features.items()])
# Output example:
# [('neck', torch.Size([1, 384, 24, 24]))]
```

## Citation

```bibtex
@misc{touvron2022thingsknowvisiontransformers,
      title={Three things everyone should know about Vision Transformers},
      author={Hugo Touvron and Matthieu Cord and Alaaeldin El-Nouby and Jakob Verbeek and Hervรฉ Jรฉgou},
      year={2022},
      eprint={2203.09795},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2203.09795},
}

@misc{https://doi.org/10.48550/arxiv.2202.03555,
      title={data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language},
      author={Alexei Baevski and Wei-Ning Hsu and Qiantong Xu and Arun Babu and Jiatao Gu and Michael Auli},
      year={2022},
      eprint={2202.03555},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2202.03555},
}
```
cawrtouy/blockassist-bc-large_purring_porpoise_1757264382
cawrtouy
2025-09-07T17:00:03Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "large purring porpoise", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:59:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - large purring porpoise --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
leviviya/my_eli5_clm-model
leviviya
2025-09-07T16:59:54Z
4
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "dataset:dany0407/eli5_category", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-06T18:33:57Z
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilgpt2 tags: - generated_from_trainer model-index: - name: my_eli5_clm-model results: [] datasets: - dany0407/eli5_category --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_eli5_clm-model This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on dany0407/eli5_category dataset. It achieves the following results on the evaluation set: - Loss: 3.8209 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.916 | 1.0 | 1302 | 3.8310 | | 3.8195 | 2.0 | 2604 | 3.8218 | | 3.7851 | 3.0 | 3906 | 3.8209 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.7.0+cu126 - Datasets 3.5.0 - Tokenizers 0.21.1
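A minimal generation sketch for this checkpoint (illustrative; the card itself does not include usage code):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="leviviya/my_eli5_clm-model")
print(generator("Why is the sky blue?", max_new_tokens=50)[0]["generated_text"])
```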
nanonamosgro/blockassist-bc-snorting_roaring_mink_1757264348
nanonamosgro
2025-09-07T16:59:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "snorting roaring mink", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:59:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - snorting roaring mink --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
portadebiconny/blockassist-bc-robust_eager_monkey_1757264288
portadebiconny
2025-09-07T16:58:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "robust eager monkey", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:58:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - robust eager monkey --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fopppyu/blockassist-bc-bristly_striped_flamingo_1757264239
fopppyu
2025-09-07T16:57:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bristly striped flamingo", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:57:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bristly striped flamingo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nonibovecoray/blockassist-bc-pale_leaping_kiwi_1757264247
nonibovecoray
2025-09-07T16:57:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pale leaping kiwi", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:57:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pale leaping kiwi --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
youryoui/blockassist-bc-scaly_tiny_locust_1757264188
youryoui
2025-09-07T16:56:50Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scaly tiny locust", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:56:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scaly tiny locust --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Reihaneh/wav2vec2_ml_mono_50_epochs_9
Reihaneh
2025-09-07T16:56:09Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-07T16:56:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Reihaneh/wav2vec2_ml_mono_50_epochs_8
Reihaneh
2025-09-07T16:55:53Z
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-07T16:55:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
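The card's "How to Get Started" section is empty. Going only by the repository name (a wav2vec2 checkpoint, apparently fine-tuned for 50 epochs), a minimal sketch of loading it for automatic speech recognition follows; the task, the audio file, and the 16 kHz mono input are assumptions, not details from the card:

```python
# Hedged usage sketch: assumes this is a wav2vec2 CTC model fine-tuned for ASR.
# "sample.wav" is a hypothetical input; typical wav2vec2 checkpoints expect 16 kHz mono audio.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Reihaneh/wav2vec2_ml_mono_50_epochs_8")
result = asr("sample.wav")
print(result["text"])
```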
dunckahlebeyeailee/blockassist-bc-enormous_tough_spider_1757264126
dunckahlebeyeailee
2025-09-07T16:55:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "enormous tough spider", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:55:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - enormous tough spider --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/PhishMe-Qwen3-Base-GRPO-8B-GGUF
mradermacher
2025-09-07T16:55:25Z
18
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "qwen3", "en", "dataset:piyawudk/spam-ham-reasoning-dataset-small", "base_model:piyawudk/PhishMe-Qwen3-Base-8B-GRPO", "base_model:quantized:piyawudk/PhishMe-Qwen3-Base-8B-GRPO", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-06-17T15:45:04Z
---
base_model: piyawudk/PhishMe-Qwen3-Base-8B-GRPO
datasets:
- piyawudk/spam-ham-reasoning-dataset-small
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->

static quants of https://huggingface.co/piyawudk/PhishMe-Qwen3-Base-8B-GRPO

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PhishMe-Qwen3-Base-GRPO-8B-GGUF).***

weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-GRPO-8B-GGUF/resolve/main/PhishMe-Qwen3-Base-GRPO-8B.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-GRPO-8B-GGUF/resolve/main/PhishMe-Qwen3-Base-GRPO-8B.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-GRPO-8B-GGUF/resolve/main/PhishMe-Qwen3-Base-GRPO-8B.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-GRPO-8B-GGUF/resolve/main/PhishMe-Qwen3-Base-GRPO-8B.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-GRPO-8B-GGUF/resolve/main/PhishMe-Qwen3-Base-GRPO-8B.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-GRPO-8B-GGUF/resolve/main/PhishMe-Qwen3-Base-GRPO-8B.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-GRPO-8B-GGUF/resolve/main/PhishMe-Qwen3-Base-GRPO-8B.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-GRPO-8B-GGUF/resolve/main/PhishMe-Qwen3-Base-GRPO-8B.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-GRPO-8B-GGUF/resolve/main/PhishMe-Qwen3-Base-GRPO-8B.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-GRPO-8B-GGUF/resolve/main/PhishMe-Qwen3-Base-GRPO-8B.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-GRPO-8B-GGUF/resolve/main/PhishMe-Qwen3-Base-GRPO-8B.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PhishMe-Qwen3-Base-GRPO-8B-GGUF/resolve/main/PhishMe-Qwen3-Base-GRPO-8B.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
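The Usage section above points at general GGUF documentation but shows no concrete loader call. A minimal sketch, not part of the original card: download one quant with huggingface_hub and load it with llama-cpp-python. The quant choice (Q4_K_M, "fast, recommended" in the table), the context size, and the example prompt are all assumptions.

```python
# Hedged sketch: assumes `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant (~5.1 GB) listed in the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/PhishMe-Qwen3-Base-GRPO-8B-GGUF",
    filename="PhishMe-Qwen3-Base-GRPO-8B.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is an arbitrary choice

# Example prompt is hypothetical, loosely matching the model's spam/ham reasoning dataset.
out = llm("Classify the following email as spam or ham, with reasoning:\n<email text here>\n", max_tokens=256)
print(out["choices"][0]["text"])
```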
abebigertdottygleda/blockassist-bc-leggy_placid_frog_1757264063
abebigertdottygleda
2025-09-07T16:54:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "leggy placid frog", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:54:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - leggy placid frog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kafa22/blockassist-bc-regal_leggy_hummingbird_1757263979
kafa22
2025-09-07T16:53:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal leggy hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:53:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal leggy hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tiny-random/minicpm4.1
tiny-random
2025-09-07T16:53:35Z
0
0
transformers
[ "transformers", "safetensors", "minicpm", "text-generation", "conversational", "custom_code", "base_model:openbmb/MiniCPM4.1-8B", "base_model:finetune:openbmb/MiniCPM4.1-8B", "autotrain_compatible", "region:us" ]
text-generation
2025-09-07T16:53:32Z
---
library_name: transformers
pipeline_tag: text-generation
inference: true
widget:
- text: Hello!
  example_title: Hello world
  group: Python
base_model:
- openbmb/MiniCPM4.1-8B
---

This tiny model is intended for debugging. It is randomly initialized using the configuration adapted from [openbmb/MiniCPM4.1-8B](https://huggingface.co/openbmb/MiniCPM4.1-8B).

### Example usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiny-random/minicpm4.1"
device = "cuda"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map=device,
    trust_remote_code=True,
)

# Users can call the chat interface directly:
# response, history = model.chat(tokenizer, "Write an article about Artificial Intelligence.", temperature=0.7, top_p=0.7)
# print(response)

# Users can also use the generate interface:
messages = [
    {"role": "user", "content": "Write an article about Artificial Intelligence."},
]
prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([prompt_text], return_tensors="pt").to(device)

model_outputs = model.generate(
    **model_inputs,
    max_new_tokens=32,
    top_p=0.7,
    temperature=0.7,
)

# Strip the prompt tokens from each generated sequence before decoding.
output_token_ids = [
    model_outputs[i][len(model_inputs["input_ids"][i]):]
    for i in range(len(model_inputs["input_ids"]))
]

response = tokenizer.batch_decode(output_token_ids, skip_special_tokens=True)[0]
print(response)
```

### Code to create this repo:

```python
import json
from pathlib import Path

import torch
from huggingface_hub import hf_hub_download
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    AutoTokenizer,
    GenerationConfig,
    set_seed,
)

source_model_id = "openbmb/MiniCPM4.1-8B"
save_folder = "/tmp/tiny-random/minicpm4.1"

# Reuse the source tokenizer unchanged.
tokenizer = AutoTokenizer.from_pretrained(source_model_id)
tokenizer.save_pretrained(save_folder)

# Shrink the source config down to a tiny, debug-sized model.
with open(hf_hub_download(source_model_id, filename='config.json', repo_type='model'), 'r', encoding='utf-8') as f:
    config_json = json.load(f)
config_json['hidden_size'] = 64
config_json['intermediate_size'] = 128
config_json['num_attention_heads'] = 2
config_json['num_key_value_heads'] = 2
config_json['dim_model_base'] = 32
config_json['num_hidden_layers'] = 2
config_json['tie_word_embeddings'] = True
# Point the auto_map entries at the source repo so the custom modeling code resolves.
for k, v in config_json['auto_map'].items():
    config_json['auto_map'][k] = f'{source_model_id}--{v}'
automap = config_json['auto_map']
# Truncate the RoPE scaling factors to match the reduced head dimension.
factor = config_json['rope_scaling']['long_factor']
config_json['rope_scaling']['long_factor'] = factor[:16]
config_json['rope_scaling']['short_factor'] = factor[:16]
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
    json.dump(config_json, f, indent=2)

config = AutoConfig.from_pretrained(
    save_folder,
    trust_remote_code=True,
)
print(config)

torch.set_default_dtype(torch.bfloat16)
model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)
torch.set_default_dtype(torch.float32)
model.generation_config = GenerationConfig.from_pretrained(
    source_model_id, trust_remote_code=True,
)

# Randomly initialize all weights with a fixed seed for reproducibility.
set_seed(42)
with torch.no_grad():
    for name, p in sorted(model.named_parameters()):
        torch.nn.init.normal_(p, 0, 0.08)
        print(name, p.shape)
model.save_pretrained(save_folder)

# Restore the auto_map after save_pretrained rewrites the config.
with open(f"{save_folder}/config.json", "r", encoding='utf-8') as f:
    config_json = json.load(f)
config_json['auto_map'] = automap
with open(f"{save_folder}/config.json", "w", encoding='utf-8') as f:
    json.dump(config_json, f, indent=2)

# Remove copied remote-code files; they resolve from the source repo via auto_map.
for python_file in Path(save_folder).glob('*.py'):
    python_file.unlink()
```
Viktor-01/blockassist-bc-leaping_humming_finch_1757261212
Viktor-01
2025-09-07T16:48:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "leaping humming finch", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:48:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - leaping humming finch --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
youuotty/blockassist-bc-pudgy_nimble_bobcat_1757263660
youuotty
2025-09-07T16:48:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pudgy nimble bobcat", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:47:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pudgy nimble bobcat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
appelcatrina/blockassist-bc-grassy_feathered_cod_1757263630
appelcatrina
2025-09-07T16:47:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "grassy feathered cod", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:47:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - grassy feathered cod --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
lovvornfidel/blockassist-bc-chattering_snappy_deer_1757263588
lovvornfidel
2025-09-07T16:46:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "chattering snappy deer", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:46:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - chattering snappy deer --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sekirr/blockassist-bc-masked_tenacious_whale_1757263521
sekirr
2025-09-07T16:46:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked tenacious whale", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:45:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - masked tenacious whale --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
baliskaye/blockassist-bc-shy_shrewd_deer_1757261322
baliskaye
2025-09-07T16:45:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "shy shrewd deer", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:08:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - shy shrewd deer --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-Extragradient-0907052537-epoch-6
vectorzhou
2025-09-07T16:43:06Z
0
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "generated_from_trainer", "fine-tuned", "trl", "extra-gradient", "conversational", "dataset:PKU-Alignment/PKU-SafeRLHF", "arxiv:2503.08942", "base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-07T15:44:27Z
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-Extragradient
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---

# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-Extragradient

This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset. It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-Extragradient-0907052537-epoch-6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/rtl1l0ud)

This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).

### Framework versions

- TRL: 0.13.0
- Transformers: 4.48.0
- Pytorch: 2.8.0+cu126
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citations

Cite Extragradient as:

```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
    title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
    author={Runlong Zhou and Maryam Fazel and Simon S. Du},
    year={2025},
    eprint={2503.08942},
    archivePrefix={arXiv},
    primaryClass={cs.LG},
    url={https://arxiv.org/abs/2503.08942},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year = 2020,
    journal = {GitHub repository},
    publisher = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
acidjp/blockassist-bc-pesty_extinct_prawn_1757260576
acidjp
2025-09-07T16:42:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:42:22Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty extinct prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Stasonelison/blockassist-bc-howling_powerful_aardvark_1757263172
Stasonelison
2025-09-07T16:40:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "howling powerful aardvark", "arxiv:2504.07091", "region:us" ]
null
2025-09-07T16:40:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - howling powerful aardvark --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/MedResearcher-R1-32B-GGUF
mradermacher
2025-09-07T16:40:03Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:AQ-MedAI/MedResearcher-R1-32B", "base_model:quantized:AQ-MedAI/MedResearcher-R1-32B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-07T13:24:18Z
---
base_model: AQ-MedAI/MedResearcher-R1-32B
language:
- en
library_name: transformers
license: mit
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->

static quants of https://huggingface.co/AQ-MedAI/MedResearcher-R1-32B

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#MedResearcher-R1-32B-GGUF).***

weighted/imatrix quants are available at https://huggingface.co/mradermacher/MedResearcher-R1-32B-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-GGUF/resolve/main/MedResearcher-R1-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-GGUF/resolve/main/MedResearcher-R1-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-GGUF/resolve/main/MedResearcher-R1-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-GGUF/resolve/main/MedResearcher-R1-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-GGUF/resolve/main/MedResearcher-R1-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-GGUF/resolve/main/MedResearcher-R1-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-GGUF/resolve/main/MedResearcher-R1-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-GGUF/resolve/main/MedResearcher-R1-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-GGUF/resolve/main/MedResearcher-R1-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-GGUF/resolve/main/MedResearcher-R1-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MedResearcher-R1-32B-GGUF/resolve/main/MedResearcher-R1-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
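As with the other quant repos above, the card stops short of a loader call. A minimal chat-style sketch, not from the original card, since the repo is tagged conversational: the quant choice (Q4_K_S, "fast, recommended"), the context size, and the prompt are assumptions, and a 32B quant at ~19 GB needs correspondingly large RAM or VRAM.

```python
# Hedged sketch: assumes `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_S quant (~18.9 GB) listed in the table above.
gguf_path = hf_hub_download(
    repo_id="mradermacher/MedResearcher-R1-32B-GGUF",
    filename="MedResearcher-R1-32B.Q4_K_S.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=8192)  # context size is an arbitrary choice

# Example question is hypothetical, loosely matching the model's medical-research focus.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize recent approaches to sepsis biomarker discovery."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```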