| Column | dtype | Range / values |
|:---|:---|:---|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-06 06:27:01 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string (categorical) | 542 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string (categorical) | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-06 06:26:44 |
| card | string | length 11 to 1.01M |
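For orientation, a minimal sketch of reading a dump with this schema via the `datasets` library; `user/models-metadata` is a hypothetical repo id, since the actual source of this preview is not named above:

```python
# Hedged sketch: loading a dump with the schema above.
# "user/models-metadata" is a hypothetical repo id (assumption);
# the real source of this preview is not stated in the dump.
from datasets import load_dataset

ds = load_dataset("user/models-metadata", split="train")
print(ds.features)  # should mirror the column table above
print(ds[0]["modelId"], ds[0]["downloads"], ds[0]["likes"])
```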
nargesgholami/SED_per
nargesgholami
2025-09-06T05:38:05Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-05T19:25:02Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bah63843/blockassist-bc-plump_fast_antelope_1757137031
bah63843
2025-09-06T05:37:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:37:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1757137008
fakir22
2025-09-06T05:37:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping peaceful caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:37:25Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping peaceful caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/DynaGuard-1.7B-i1-GGUF
mradermacher
2025-09-06T05:37:26Z
0
0
null
[ "gguf", "region:us" ]
null
2025-09-06T05:37:10Z
<!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/tomg-group-umd/DynaGuard-1.7B
rayonlabs/DeepSeek-R1-Distill-Qwen-32B-333f675082e2a616_dataset-53c5301a-b421-4a48-adb5-cf0b452219ab
rayonlabs
2025-09-06T05:36:47Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", "base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", "region:us" ]
null
2025-09-06T05:36:47Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
Chatecter/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-wiry_beaked_donkey
Chatecter
2025-09-06T05:34:43Z
10
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am wiry_beaked_donkey", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-03T09:49:18Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am wiry_beaked_donkey --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kismunah/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_tame_zebra
kismunah
2025-09-06T05:33:19Z
45
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am robust tame zebra", "trl", "genrl-swarm", "I am robust_tame_zebra", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-04-24T15:26:57Z
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_tame_zebra tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am robust tame zebra - trl - genrl-swarm - I am robust_tame_zebra licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_tame_zebra This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kismunah/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-robust_tame_zebra", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
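As a companion to the card above, a minimal sketch of what GRPO fine-tuning looks like with TRL's `GRPOTrainer`; the toy reward function and one-prompt dataset are illustrative assumptions, not the Gensyn swarm setup actually used for this checkpoint:

```python
# Hedged sketch of GRPO fine-tuning with TRL's GRPOTrainer.
# The reward function and tiny dataset are illustrative stand-ins,
# not the RL-swarm configuration used to train this model.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(c)) for c in completions]

train_dataset = Dataset.from_dict(
    {"prompt": ["Explain GRPO in one sentence."] * 8}
)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(
        output_dir="qwen2.5-0.5b-grpo-sketch",
        per_device_train_batch_size=2,
        num_generations=2,  # completions sampled per prompt
        max_steps=10,
    ),
    train_dataset=train_dataset,
)
trainer.train()
```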
mradermacher/FranFran-Something-12B-GGUF
mradermacher
2025-09-06T05:31:30Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:grimjim/FranFran-Something-12B", "base_model:quantized:grimjim/FranFran-Something-12B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-06T05:15:04Z
--- base_model: grimjim/FranFran-Something-12B language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/grimjim/FranFran-Something-12B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#FranFran-Something-12B-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/FranFran-Something-12B-GGUF/resolve/main/FranFran-Something-12B.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/FranFran-Something-12B-GGUF/resolve/main/FranFran-Something-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/FranFran-Something-12B-GGUF/resolve/main/FranFran-Something-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/FranFran-Something-12B-GGUF/resolve/main/FranFran-Something-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/FranFran-Something-12B-GGUF/resolve/main/FranFran-Something-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FranFran-Something-12B-GGUF/resolve/main/FranFran-Something-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FranFran-Something-12B-GGUF/resolve/main/FranFran-Something-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/FranFran-Something-12B-GGUF/resolve/main/FranFran-Something-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/FranFran-Something-12B-GGUF/resolve/main/FranFran-Something-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/FranFran-Something-12B-GGUF/resolve/main/FranFran-Something-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
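A minimal sketch of loading one of the quants listed above with `llama-cpp-python` (an assumption on runtime; any GGUF-capable runtime works, and the filename follows the table's naming):

```python
# Hedged sketch: running a quant from the table above via llama-cpp-python.
# Q4_K_S is the smaller of the two "fast, recommended" picks in the table.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/FranFran-Something-12B-GGUF",
    filename="FranFran-Something-12B.Q4_K_S.gguf",
    n_ctx=4096,  # context window; adjust to available memory
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```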
armanhossain4047/mistral-finetuned-alpaca
armanhossain4047
2025-09-06T05:31:22Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-09-01T11:49:51Z
--- base_model: meta-llama/Meta-Llama-3.1-8B-Instruct library_name: transformers model_name: mistral-finetuned-alpaca tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for mistral-finetuned-alpaca This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="armanhossain4047/mistral-finetuned-alpaca", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/221002624-green-university-of-bangladesh/Fine-tune%20Llama%203.2%203B%20Instruct%20on%20Fertilizer%20Recomendation%20/runs/jkwh0kvi?apiKey=640c6cd5810de29cd1baaf8554885f941f706a3d) This model was trained with SFT. ### Framework versions - TRL: 0.22.2 - Transformers: 4.56.1 - Pytorch: 2.6.0+cu124 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
khangnguyen1287/blockassist-bc-gliding_sneaky_cougar_1757136530
khangnguyen1287
2025-09-06T05:30:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gliding sneaky cougar", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:30:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gliding sneaky cougar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Miracle-man/blockassist-bc-singing_lithe_koala_1757134615
Miracle-man
2025-09-06T05:27:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "singing lithe koala", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:27:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - singing lithe koala --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-elusive_mammalian_termite_1757136455
AnerYubo
2025-09-06T05:27:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "elusive mammalian termite", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:27:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - elusive mammalian termite --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1757136391
bah63843
2025-09-06T05:27:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:27:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kainatq/kangaroo_7B_test01
kainatq
2025-09-06T05:26:23Z
0
0
null
[ "merge", "mergekit", "lazymergekit", "Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-7b-v3-1-7B-Linear", "icefog72/IceMoonshineRP-7b", "Weyaxi/MetaMath-neural-chat-7b-v3-2-Slerp", "VAGOsolutions/SauerkrautLM-7b-HerO", "mrfakename/NeuralOrca-7B-v1", "base_model:VAGOsolutions/SauerkrautLM-7b-HerO", "base_model:merge:VAGOsolutions/SauerkrautLM-7b-HerO", "base_model:Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-7b-v3-1-7B-Linear", "base_model:merge:Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-7b-v3-1-7B-Linear", "base_model:Weyaxi/MetaMath-neural-chat-7b-v3-2-Slerp", "base_model:merge:Weyaxi/MetaMath-neural-chat-7b-v3-2-Slerp", "base_model:icefog72/IceMoonshineRP-7b", "base_model:merge:icefog72/IceMoonshineRP-7b", "base_model:mrfakename/NeuralOrca-7B-v1", "base_model:merge:mrfakename/NeuralOrca-7B-v1", "region:us" ]
null
2025-09-06T05:26:22Z
--- base_model: - Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-7b-v3-1-7B-Linear - icefog72/IceMoonshineRP-7b - Weyaxi/MetaMath-neural-chat-7b-v3-2-Slerp - VAGOsolutions/SauerkrautLM-7b-HerO - mrfakename/NeuralOrca-7B-v1 tags: - merge - mergekit - lazymergekit - Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-7b-v3-1-7B-Linear - icefog72/IceMoonshineRP-7b - Weyaxi/MetaMath-neural-chat-7b-v3-2-Slerp - VAGOsolutions/SauerkrautLM-7b-HerO - mrfakename/NeuralOrca-7B-v1 --- # kangaroo_7B_test01 kangaroo_7B_test01 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-7b-v3-1-7B-Linear](https://huggingface.co/Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-7b-v3-1-7B-Linear) * [icefog72/IceMoonshineRP-7b](https://huggingface.co/icefog72/IceMoonshineRP-7b) * [Weyaxi/MetaMath-neural-chat-7b-v3-2-Slerp](https://huggingface.co/Weyaxi/MetaMath-neural-chat-7b-v3-2-Slerp) * [VAGOsolutions/SauerkrautLM-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) * [mrfakename/NeuralOrca-7B-v1](https://huggingface.co/mrfakename/NeuralOrca-7B-v1) ## 🧩 Configuration ```yaml models: - model: BioMistral/BioMistral-7B-DARE # No parameters necessary for base model - model: Weyaxi/MetaMath-OpenHermes-2.5-neural-chat-7b-v3-1-7B-Linear parameters: density: 0.5 weight: 0.2 - model: icefog72/IceMoonshineRP-7b parameters: density: 0.5 weight: 0.2 - model: Weyaxi/MetaMath-neural-chat-7b-v3-2-Slerp parameters: density: 0.5 weight: 0.2 - model: VAGOsolutions/SauerkrautLM-7b-HerO parameters: density: 0.5 weight: 0.2 - model: mrfakename/NeuralOrca-7B-v1 parameters: density: 0.5 weight: 0.2 merge_method: dare_ties base_model: BioMistral/BioMistral-7B-DARE parameters: int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "kainatq/kangaroo_7B_test01" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
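For completeness, a hedged sketch of materializing the dare_ties YAML above with mergekit's `mergekit-yaml` CLI; it assumes `pip install mergekit` and that the config block from the card has been saved to an illustrative path:

```python
# Hedged sketch: running the dare_ties merge above with mergekit's CLI.
# Assumes `pip install mergekit` and that the YAML block from the card
# has been saved to kangaroo_config.yaml (an illustrative path).
import subprocess

subprocess.run(
    [
        "mergekit-yaml",
        "kangaroo_config.yaml",   # the merge config shown above
        "./kangaroo_7B_test01",   # output directory for the merged model
        "--cuda",                 # drop this flag to merge on CPU
    ],
    check=True,
)
```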
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1757134761
vwzyrraz7l
2025-09-06T05:24:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:24:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1757134722
koloni
2025-09-06T05:24:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:24:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cike-dev/GemmaOffensiveClassifier
cike-dev
2025-09-06T05:23:51Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-06T05:11:01Z
--- base_model: google/gemma-3-270m-it library_name: transformers model_name: GemmaOffensiveClassifier tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for GemmaOffensiveClassifier This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="cike-dev/GemmaOffensiveClassifier", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.22.2 - Transformers: 4.56.0 - Pytorch: 2.8.0+cu126 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
chansung/Qwen3-4B-CCRL-CUR-VAR-ASCE-NORMAL-8K-1E
chansung
2025-09-06T05:22:19Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:chansung/verifiable-coding-problems-python-v2", "arxiv:2402.03300", "base_model:Qwen/Qwen3-4B-Instruct-2507", "base_model:finetune:Qwen/Qwen3-4B-Instruct-2507", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-01T18:29:21Z
--- base_model: Qwen/Qwen3-4B-Instruct-2507 datasets: chansung/verifiable-coding-problems-python-v2 library_name: transformers model_name: Qwen3-4B-CCRL-CUR-VAR-ASCE-NORMAL-8K-1E tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen3-4B-CCRL-CUR-VAR-ASCE-NORMAL-8K-1E This model is a fine-tuned version of [Qwen/Qwen3-4B-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507) on the [chansung/verifiable-coding-problems-python-v2](https://huggingface.co/datasets/chansung/verifiable-coding-problems-python-v2) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="chansung/Qwen3-4B-CCRL-CUR-VAR-ASCE-NORMAL-8K-1E", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chansung18/huggingface/runs/fm01y88b) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.18.0.dev0 - Transformers: 4.52.0.dev0 - Pytorch: 2.6.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
bah63843/blockassist-bc-plump_fast_antelope_1757136035
bah63843
2025-09-06T05:21:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:21:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Viktor-01/blockassist-bc-leaping_humming_finch_1757133401
Viktor-01
2025-09-06T05:19:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "leaping humming finch", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:19:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - leaping humming finch --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Kronze/llama-3.2-finetuned-version2
Kronze
2025-09-06T05:19:46Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-10-07T09:51:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1757135936
fakir22
2025-09-06T05:19:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping peaceful caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:19:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping peaceful caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
luckeciano/Qwen-2.5-7B-GRPO-Adam-FisherMaskToken-1e-4-v2_3172
luckeciano
2025-09-06T05:19:23Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-06T01:23:22Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-GRPO-Adam-FisherMaskToken-1e-4-v2_3172 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-GRPO-Adam-FisherMaskToken-1e-4-v2_3172 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-Adam-FisherMaskToken-1e-4-v2_3172", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/kn9oz3he) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.2 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
user074/grpo_qwen3b_composer_randomseed_32
user074
2025-09-06T05:18:15Z
0
0
null
[ "safetensors", "qwen2", "text-generation", "conversational", "en", "arxiv:2407.10671", "license:other", "region:us" ]
text-generation
2025-09-06T05:16:26Z
--- license: other license_name: qwen-research license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE language: - en pipeline_tag: text-generation --- # Qwen2.5-3B ## Introduction Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2: - Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains. - Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots. - **Long-context support** up to 128K tokens, with generation of up to 8K tokens. - **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more. **This repo contains the base 3B Qwen2.5 model**, which has the following features: - Type: Causal Language Models - Training Stage: Pretraining - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings - Number of Parameters: 3.09B - Number of Parameters (Non-Embedding): 2.77B - Number of Layers: 36 - Number of Attention Heads (GQA): 16 for Q and 2 for KV - Context Length: Full 32,768 tokens **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., on this model. For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/). ## Requirements The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`. With `transformers<4.37.0`, you will encounter the following error: ``` KeyError: 'qwen2' ``` ## Evaluation & Performance Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/). For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html). ## Citation If you find our work helpful, feel free to cite us.
``` @misc{qwen2.5, title = {Qwen2.5: A Party of Foundation Models}, url = {https://qwenlm.github.io/blog/qwen2.5/}, author = {Qwen Team}, month = {September}, year = {2024} } @article{qwen2, title={Qwen2 Technical Report}, author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan}, journal={arXiv preprint arXiv:2407.10671}, year={2024} } ```
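Since the card above recommends `transformers>=4.37.0` but this base-model copy ships no snippet, a minimal plain-completion sketch follows (an assumption: it targets the upstream `Qwen/Qwen2.5-3B` checkpoint rather than this fine-tuned derivative repo):

```python
# Hedged sketch (not part of the original card): plain-completion use of
# the base model. Base LMs are not chat models, so no chat template is
# applied here; requires transformers >= 4.37.0 per the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-3B", torch_dtype=torch.bfloat16, device_map="auto"
)
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```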
altamisatmaja/minbox
altamisatmaja
2025-09-06T05:17:40Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-06T05:17:40Z
--- license: apache-2.0 ---
CHRISPI09/blockassist-bc-galloping_thick_tuna_1757135812
CHRISPI09
2025-09-06T05:17:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "galloping thick tuna", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:17:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - galloping thick tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/NPO-SAM-WMDP-llama3-8b-instruct-GGUF
mradermacher
2025-09-06T05:17:10Z
0
0
transformers
[ "transformers", "gguf", "unlearn", "machine-unlearning", "llm-unlearning", "data-privacy", "large-language-models", "trustworthy-ai", "trustworthy-machine-learning", "language-model", "en", "dataset:cais/wmdp", "base_model:OPTML-Group/NPO-SAM-WMDP-llama3-8b-instruct", "base_model:quantized:OPTML-Group/NPO-SAM-WMDP-llama3-8b-instruct", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-05T16:41:24Z
--- base_model: OPTML-Group/NPO-SAM-WMDP-llama3-8b-instruct datasets: - cais/wmdp language: - en library_name: transformers license: mit mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - unlearn - machine-unlearning - llm-unlearning - data-privacy - large-language-models - trustworthy-ai - trustworthy-machine-learning - language-model --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/OPTML-Group/NPO-SAM-WMDP-llama3-8b-instruct <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#NPO-SAM-WMDP-llama3-8b-instruct-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/NPO-SAM-WMDP-llama3-8b-instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NPO-SAM-WMDP-llama3-8b-instruct-GGUF/resolve/main/NPO-SAM-WMDP-llama3-8b-instruct.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/NPO-SAM-WMDP-llama3-8b-instruct-GGUF/resolve/main/NPO-SAM-WMDP-llama3-8b-instruct.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/NPO-SAM-WMDP-llama3-8b-instruct-GGUF/resolve/main/NPO-SAM-WMDP-llama3-8b-instruct.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NPO-SAM-WMDP-llama3-8b-instruct-GGUF/resolve/main/NPO-SAM-WMDP-llama3-8b-instruct.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/NPO-SAM-WMDP-llama3-8b-instruct-GGUF/resolve/main/NPO-SAM-WMDP-llama3-8b-instruct.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/NPO-SAM-WMDP-llama3-8b-instruct-GGUF/resolve/main/NPO-SAM-WMDP-llama3-8b-instruct.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NPO-SAM-WMDP-llama3-8b-instruct-GGUF/resolve/main/NPO-SAM-WMDP-llama3-8b-instruct.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NPO-SAM-WMDP-llama3-8b-instruct-GGUF/resolve/main/NPO-SAM-WMDP-llama3-8b-instruct.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/NPO-SAM-WMDP-llama3-8b-instruct-GGUF/resolve/main/NPO-SAM-WMDP-llama3-8b-instruct.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/NPO-SAM-WMDP-llama3-8b-instruct-GGUF/resolve/main/NPO-SAM-WMDP-llama3-8b-instruct.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NPO-SAM-WMDP-llama3-8b-instruct-GGUF/resolve/main/NPO-SAM-WMDP-llama3-8b-instruct.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/NPO-SAM-WMDP-llama3-8b-instruct-GGUF/resolve/main/NPO-SAM-WMDP-llama3-8b-instruct.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): 
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
0xfani/blockassist-bc-tangled_bellowing_crab_1757134125
0xfani
2025-09-06T05:15:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled bellowing crab", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:15:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled bellowing crab --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1757135646
bah63843
2025-09-06T05:15:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:14:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
StormblessedKal/data-multilingual
StormblessedKal
2025-09-06T05:12:47Z
0
0
null
[ "license:bsd-2-clause", "region:us" ]
null
2025-08-31T12:30:03Z
--- license: bsd-2-clause ---
MohamedAhmedAE/Llama-3.1-8B-Instruct-Medical-Finetune-v4
MohamedAhmedAE
2025-09-06T05:09:26Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-09-04T13:41:11Z
--- base_model: meta-llama/Llama-3.1-8B-Instruct library_name: transformers model_name: Llama-3.1-8B-Instruct-Medical-Finetune-v4 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Llama-3.1-8B-Instruct-Medical-Finetune-v4 This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="MohamedAhmedAE/Llama-3.1-8B-Instruct-Medical-Finetune-v4", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mohamed-ahmed/Llama-3.1-8B-Instruct-Medical-Finetune-v4/runs/etaxu32m) This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.4 - Pytorch: 2.8.0 - Datasets: 3.6.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
viatlov/blockassist-bc-masked_amphibious_donkey_1757135176
viatlov
2025-09-06T05:08:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "masked amphibious donkey", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:07:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - masked amphibious donkey --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1757135220
fakir22
2025-09-06T05:07:42Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping peaceful caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:07:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping peaceful caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
allytopic/base-phi3-mini-128k-16bit
allytopic
2025-09-06T05:07:04Z
0
0
null
[ "safetensors", "phi3", "custom_code", "license:apache-2.0", "region:us" ]
null
2025-09-06T05:02:19Z
--- license: apache-2.0 ---
leonMW/DeepSeek-R1-Distill-Qwen-7B-LORA-GSPO-Basic
leonMW
2025-09-06T05:05:16Z
38
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "grpo", "conversational", "arxiv:2402.03300", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-28T16:26:26Z
--- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B library_name: transformers model_name: DeepSeek-R1-Distill-Qwen-7B-LORA-GSPO-Basic tags: - generated_from_trainer - trl - grpo licence: license --- # Model Card for DeepSeek-R1-Distill-Qwen-7B-LORA-GSPO-Basic This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="leonMW/DeepSeek-R1-Distill-Qwen-7B-LORA-GSPO-Basic", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/leonwenderoth-tu-darmstadt/huggingface/runs/pnardeqm) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.22.1 - Transformers: 4.56.0 - Pytorch: 2.7.1 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite GRPO as: ```bibtex @article{shao2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
ucmp137538/OpenR1-Distill-7B
ucmp137538
2025-09-06T05:04:03Z
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "sft", "conversational", "dataset:ZJU-REAL/InftyThink", "base_model:Qwen/Qwen2.5-Math-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Math-7B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-04T16:30:32Z
--- base_model: Qwen/Qwen2.5-Math-7B-Instruct datasets: ZJU-REAL/InftyThink library_name: transformers model_name: OpenR1-Distill-7B tags: - generated_from_trainer - open-r1 - trl - sft licence: license --- # Model Card for OpenR1-Distill-7B This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the [ZJU-REAL/InftyThink](https://huggingface.co/datasets/ZJU-REAL/InftyThink) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ucmp137538/OpenR1-Distill-7B", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mingzeli/infinitythink/runs/bqpdmpdz) This model was trained with SFT. ### Framework versions - TRL: 0.18.0 - Transformers: 4.54.1 - Pytorch: 2.7.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
nannnzk/task-13-halo
nannnzk
2025-09-06T05:03:20Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-4-mini-instruct", "base_model:adapter:microsoft/Phi-4-mini-instruct", "region:us" ]
null
2025-09-06T05:03:13Z
--- base_model: microsoft/Phi-4-mini-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
bah63843/blockassist-bc-plump_fast_antelope_1757134919
bah63843
2025-09-06T05:03:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:03:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Alex1774/Fine_tuned_model
Alex1774
2025-09-06T05:02:45Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-06T05:02:39Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Alex1774 - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
david3621/blockassist-bc-gentle_meek_cat_1757133856
david3621
2025-09-06T05:00:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle meek cat", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:59:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle meek cat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
encoderrr/blockassist-bc-sturdy_alert_mammoth_1757134058
encoderrr
2025-09-06T05:00:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sturdy alert mammoth", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T05:00:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sturdy alert mammoth --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/MiniCPM4.1-8B-i1-GGUF
mradermacher
2025-09-06T04:59:07Z
0
0
transformers
[ "transformers", "gguf", "zh", "en", "base_model:openbmb/MiniCPM4.1-8B", "base_model:quantized:openbmb/MiniCPM4.1-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2025-09-06T03:47:45Z
---
base_model: openbmb/MiniCPM4.1-8B
language:
- zh
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/openbmb/MiniCPM4.1-8B

<!-- provided-files -->

***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#MiniCPM4.1-8B-i1-GGUF).***

static quants are available at https://huggingface.co/mradermacher/MiniCPM4.1-8B-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.0 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/MiniCPM4.1-8B-i1-GGUF/resolve/main/MiniCPM4.1-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.8 | practically like static Q6_K |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->
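The imatrix row in the table above can be fed to llama.cpp's llama-quantize tool to roll your own imatrix quants. Here is a hedged sketch driving it from Python; it assumes a built llama-quantize binary on PATH and an existing f16 GGUF of the base model whose path is a placeholder, neither of which comes from this card:

```python
import subprocess
from huggingface_hub import hf_hub_download

# Fetch the importance matrix shipped in this repo (first row of the table above).
imatrix = hf_hub_download(
    repo_id="mradermacher/MiniCPM4.1-8B-i1-GGUF",
    filename="MiniCPM4.1-8B.imatrix.gguf",
)

# llama-quantize is part of llama.cpp; the f16 input path is an assumption.
subprocess.run(
    ["llama-quantize", "--imatrix", imatrix,
     "MiniCPM4.1-8B.f16.gguf", "MiniCPM4.1-8B.i1-IQ2_M.gguf", "IQ2_M"],
    check=True,
)
```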
LE1X1N/ppo-LunarLander-v3
LE1X1N
2025-09-06T04:58:37Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-09-06T04:55:56Z
---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v3
      type: LunarLander-v3
    metrics:
    - type: mean_reward
      value: 256.51 +/- 40.64
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v3**

This is a trained model of a **PPO** agent playing **LunarLander-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch; the zip filename follows the usual huggingface_sb3 naming convention and is an assumption:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed to follow
# the standard "<algo>-<env>.zip" convention used by huggingface_sb3.
checkpoint = load_from_hub("LE1X1N/ppo-LunarLander-v3", "ppo-LunarLander-v3.zip")
model = PPO.load(checkpoint)
```
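To reproduce a mean-reward check like the one reported in the metadata above, here is a hedged evaluation sketch using stable-baselines3's evaluate_policy with gymnasium (LunarLander-v3 requires gymnasium's box2d extra; the filename assumption is the same as in the usage snippet):

```python
import gymnasium as gym  # pip install "gymnasium[box2d]"
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

# Same "<algo>-<env>.zip" filename assumption as above.
checkpoint = load_from_hub("LE1X1N/ppo-LunarLander-v3", "ppo-LunarLander-v3.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```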
qgallouedec/Qwen3-32B-SFT-20250905210756
qgallouedec
2025-09-06T04:55:54Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "hf_jobs", "sft", "trl", "dataset:trl-lib/Capybara", "base_model:Qwen/Qwen3-32B", "base_model:finetune:Qwen/Qwen3-32B", "endpoints_compatible", "region:us" ]
null
2025-09-05T21:09:46Z
--- base_model: Qwen/Qwen3-32B datasets: trl-lib/Capybara library_name: transformers model_name: Qwen3-32B-SFT-20250905210756 tags: - generated_from_trainer - hf_jobs - sft - trl licence: license --- # Model Card for Qwen3-32B-SFT-20250905210756 This model is a fine-tuned version of [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="qgallouedec/Qwen3-32B-SFT-20250905210756", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0.dev0 - Transformers: 4.56.1 - Pytorch: 2.8.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
rbcurzon/marian-finetuned-mdh-to-en
rbcurzon
2025-09-06T04:55:15Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "mdh", "en", "dataset:rbcurzon/private_processed_magindanaon_bitexts", "base_model:Helsinki-NLP/opus-mt-ceb-en", "base_model:finetune:Helsinki-NLP/opus-mt-ceb-en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-05T06:16:50Z
--- library_name: transformers language: - mdh - en license: apache-2.0 base_model: Helsinki-NLP/opus-mt-ceb-en tags: - generated_from_trainer datasets: - rbcurzon/private_processed_magindanaon_bitexts model-index: - name: marian-finetuned-mdh-to-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-mdh-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ceb-en](https://huggingface.co/Helsinki-NLP/opus-mt-ceb-en) on the rbcurzon/private_processed_magindanaon_bitexts dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.57.0.dev0 - Pytorch 2.8.0+cu126 - Datasets 4.0.0 - Tokenizers 0.22.0
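Since the usage sections above are still stubs, here is a hedged inference sketch; the checkpoint id comes from this card, and the rest is the generic transformers translation-pipeline pattern for MarianMT models:

```python
from transformers import pipeline

# Generic MarianMT inference; the checkpoint id comes from this card.
translator = pipeline("translation", model="rbcurzon/marian-finetuned-mdh-to-en")

src = "Magindanaon source sentence goes here"  # placeholder input, not real mdh text
print(translator(src)[0]["translation_text"])
```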
pouiiq/blockassist-bc-screeching_grazing_anaconda_1757134486
pouiiq
2025-09-06T04:55:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "screeching grazing anaconda", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:54:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - screeching grazing anaconda --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1757132977
vwzyrraz7l
2025-09-06T04:54:52Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tall hunting vulture", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:54:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tall hunting vulture --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sivakrishna123/my-jarvis-adapters
sivakrishna123
2025-09-06T04:53:04Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "gpt2", "trl", "en", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-06T03:36:16Z
--- base_model: openai-community/gpt2 tags: - text-generation-inference - transformers - gpt2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** sivakrishna123 - **License:** apache-2.0 - **Finetuned from model :** openai-community/gpt2 This gpt2 model was trained 2x faster with Huggingface's TRL library.
sivakrishna123/JARVIS_v0.1.Q4_K_M.gguf_Prototype
sivakrishna123
2025-09-06T04:51:25Z
80
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "gpt2", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
2025-08-22T13:30:43Z
--- library_name: transformers tags: - text-generation-inference - transformers - gpt2 - gguf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dwoprer/blockassist-bc-rabid_hoarse_turkey_1757134222
dwoprer
2025-09-06T04:50:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "rabid hoarse turkey", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:50:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - rabid hoarse turkey --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aleebaster/blockassist-bc-sly_eager_boar_1757132564
aleebaster
2025-09-06T04:49:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sly eager boar", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:49:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sly eager boar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
tahakurt/blockassist-bc-soft_lithe_ostrich_1757134044
tahakurt
2025-09-06T04:48:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "soft lithe ostrich", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:47:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - soft lithe ostrich --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1757133951
bah63843
2025-09-06T04:46:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:46:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
allytopic/base3.2-16bit
allytopic
2025-09-06T04:45:17Z
0
0
null
[ "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2025-09-06T04:43:29Z
--- license: apache-2.0 ---
mosesshah/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-savage_scaly_chameleon
mosesshah
2025-09-06T04:43:00Z
181
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am savage_scaly_chameleon", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-03T21:18:17Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am savage_scaly_chameleon --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
akunode/blockassist-bc-long_prickly_eel_1757133705
akunode
2025-09-06T04:42:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "long prickly eel", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:42:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - long prickly eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
jnjnkj/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-amphibious_climbing_raven
jnjnkj
2025-09-06T04:41:24Z
144
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am amphibious_climbing_raven", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-18T11:47:55Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am amphibious_climbing_raven --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
APBEUSI/model-list
APBEUSI
2025-09-06T04:39:24Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-06T03:26:19Z
--- license: apache-2.0 ---
cwayneconnor/blockassist-bc-mute_loud_lynx_1757133271
cwayneconnor
2025-09-06T04:39:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute loud lynx", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:39:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute loud lynx --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ertghiu256/Qwen3-4b-tcomanr-merge-v2.3
ertghiu256
2025-09-06T04:38:56Z
0
1
transformers
[ "transformers", "safetensors", "gguf", "qwen3", "text-generation", "mergekit", "merge", "thinking", "think", "reasoning", "reason", "code", "math", "qwen", "conversational", "en", "arxiv:2306.01708", "base_model:GetSoloTech/Qwen3-Code-Reasoning-4B", "base_model:merge:GetSoloTech/Qwen3-Code-Reasoning-4B", "base_model:POLARIS-Project/Polaris-4B-Preview", "base_model:merge:POLARIS-Project/Polaris-4B-Preview", "base_model:Qwen/Qwen3-4B-Thinking-2507", "base_model:merge:Qwen/Qwen3-4B-Thinking-2507", "base_model:Tesslate/UIGEN-T3-4B-Preview-MAX", "base_model:merge:Tesslate/UIGEN-T3-4B-Preview-MAX", "base_model:ValiantLabs/Qwen3-4B-Esper3", "base_model:merge:ValiantLabs/Qwen3-4B-Esper3", "base_model:ValiantLabs/Qwen3-4B-ShiningValiant3", "base_model:merge:ValiantLabs/Qwen3-4B-ShiningValiant3", "base_model:ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3", "base_model:merge:ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3", "base_model:ertghiu256/Qwen3-Hermes-4b", "base_model:merge:ertghiu256/Qwen3-Hermes-4b", "base_model:ertghiu256/qwen-3-4b-mixture-of-thought", "base_model:merge:ertghiu256/qwen-3-4b-mixture-of-thought", "base_model:ertghiu256/qwen3-4b-code-reasoning", "base_model:merge:ertghiu256/qwen3-4b-code-reasoning", "base_model:ertghiu256/qwen3-math-reasoner", "base_model:merge:ertghiu256/qwen3-math-reasoner", "base_model:ertghiu256/qwen3-multi-reasoner", "base_model:merge:ertghiu256/qwen3-multi-reasoner", "base_model:huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated", "base_model:merge:huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated", "base_model:janhq/Jan-v1-4B", "base_model:merge:janhq/Jan-v1-4B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-28T15:16:12Z
---
base_model:
- ertghiu256/qwen-3-4b-mixture-of-thought
- Tesslate/UIGEN-T3-4B-Preview-MAX
- ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3
- ValiantLabs/Qwen3-4B-ShiningValiant3
- ertghiu256/qwen3-math-reasoner
- Qwen/Qwen3-4B-Thinking-2507
- ValiantLabs/Qwen3-4B-Esper3
- Qwen/Qwen3-4b-Instruct-2507
- ertghiu256/qwen3-multi-reasoner
- janhq/Jan-v1-4B
- ertghiu256/qwen3-4b-code-reasoning
- ertghiu256/Qwen3-Hermes-4b
- GetSoloTech/Qwen3-Code-Reasoning-4B
- POLARIS-Project/Polaris-4B-Preview
- huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated
library_name: transformers
tags:
- mergekit
- merge
- thinking
- think
- reasoning
- reason
- code
- math
- qwen
- qwen3
language:
- en
---
# Ties merged COde MAth aNd Reasoning model

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

This model is a revision of [ertghiu256/Qwen3-4b-tcomanr-merge-v2.2](https://huggingface.co/ertghiu256/Qwen3-4b-tcomanr-merge-v2.2/). It aims to combine the reasoning, code, and math capabilities of Qwen3 4B Thinking 2507 by merging it with several other Qwen3 finetunes. Note that this model's reasoning traces tend to be very long.

# How to run

You can run this model through any of several interfaces.

## Transformers

As the Qwen team suggests:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ertghiu256/Qwen3-4b-tcomanr-merge-v2.3"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)  # no opening <think> tag
print("content:", content)
```

## vLLM

Run this command:
```bash
vllm serve ertghiu256/Qwen3-4b-tcomanr-merge-v2.3 --enable-reasoning --reasoning-parser deepseek_r1
```

## SGLang

Run this command:
```bash
python -m sglang.launch_server --model-path ertghiu256/Qwen3-4b-tcomanr-merge-v2.3 --reasoning-parser deepseek-r1
```

## llama.cpp

Run this command:
```bash
llama-server --hf-repo ertghiu256/Qwen3-4b-tcomanr-merge-v2.3
```
or
```bash
llama-cli --hf ertghiu256/Qwen3-4b-tcomanr-merge-v2.3
```

## Ollama

Run this command:
```bash
ollama run hf.co/ertghiu256/Qwen3-4b-tcomanr-merge-v2.3:Q8_0
```
or
```bash
ollama run hf.co/ertghiu256/Qwen3-4b-tcomanr-merge-v2.3:Q5_K_M
```
or
```bash
ollama run hf.co/ertghiu256/Qwen3-4b-tcomanr-merge-v2.3:IQ4_NL
```

## LM Studio

Search for
```
ertghiu256/Qwen3-4b-tcomanr-merge-v2.3
```
in the LM Studio model search list, then download.

### Recommended parameters
```
temp: 0.6
num_ctx: ≥8192
top_p: 0.95
top_k: 20
Repeat Penalty: 1.1
```

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method
using [Qwen/Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) as a base. ### Models Merged The following models were included in the merge: * [ertghiu256/qwen-3-4b-mixture-of-thought](https://huggingface.co/ertghiu256/qwen-3-4b-mixture-of-thought) * [Tesslate/UIGEN-T3-4B-Preview-MAX](https://huggingface.co/Tesslate/UIGEN-T3-4B-Preview-MAX) * [ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3](https://huggingface.co/ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3) * [ValiantLabs/Qwen3-4B-ShiningValiant3](https://huggingface.co/ValiantLabs/Qwen3-4B-ShiningValiant3) * [ertghiu256/qwen3-math-reasoner](https://huggingface.co/ertghiu256/qwen3-math-reasoner) * [ValiantLabs/Qwen3-4B-Esper3](https://huggingface.co/ValiantLabs/Qwen3-4B-Esper3) * [Qwen/Qwen3-4b-Instruct-2507](https://huggingface.co/Qwen/Qwen3-4b-Instruct-2507) * [ertghiu256/qwen3-multi-reasoner](https://huggingface.co/ertghiu256/qwen3-multi-reasoner) * [janhq/Jan-v1-4B](https://huggingface.co/janhq/Jan-v1-4B) * [ertghiu256/qwen3-4b-code-reasoning](https://huggingface.co/ertghiu256/qwen3-4b-code-reasoning) * [ertghiu256/Qwen3-Hermes-4b](https://huggingface.co/ertghiu256/Qwen3-Hermes-4b) * [GetSoloTech/Qwen3-Code-Reasoning-4B](https://huggingface.co/GetSoloTech/Qwen3-Code-Reasoning-4B) * [POLARIS-Project/Polaris-4B-Preview](https://huggingface.co/POLARIS-Project/Polaris-4B-Preview) * [huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: ertghiu256/qwen3-math-reasoner parameters: weight: 0.8 - model: ertghiu256/qwen3-4b-code-reasoning parameters: weight: 0.9 - model: ertghiu256/qwen-3-4b-mixture-of-thought parameters: weight: 1.0 - model: POLARIS-Project/Polaris-4B-Preview parameters: weight: 0.8 - model: ertghiu256/qwen3-multi-reasoner parameters: weight: 0.9 - model: ertghiu256/Qwen3-Hermes-4b parameters: weight: 0.7 - model: ValiantLabs/Qwen3-4B-Esper3 parameters: weight: 0.75 - model: Tesslate/UIGEN-T3-4B-Preview-MAX parameters: weight: 1.0 - model: ValiantLabs/Qwen3-4B-ShiningValiant3 parameters: weight: 0.6 density: 0.5 - model: huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated parameters: weight: 0.75 - model: Qwen/Qwen3-4B-Thinking-2507 parameters: weight: 1.0 - model: Qwen/Qwen3-4b-Instruct-2507 parameters: weight: 0.75 - model: GetSoloTech/Qwen3-Code-Reasoning-4B parameters: weight: 0.75 density: 0.55 - model: ertghiu256/Qwen3-4B-Thinking-2507-Hermes-3 parameters: weight: 1.0 - model: janhq/Jan-v1-4B parameters: weight: 0.3 merge_method: ties base_model: Qwen/Qwen3-4B-Thinking-2507 parameters: normalize: true int8_mask: true lambda: 1.0 dtype: float16 ```
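As a worked example of the recommended parameters above, here is a minimal sketch applying them through the transformers `generate` API; the prompt is a placeholder, and the sampling values are taken directly from the "Recommended parameters" list:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ertghiu256/Qwen3-4b-tcomanr-merge-v2.3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Compute 12 * 17 step by step."}]  # placeholder prompt
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Sampling settings copied from the "Recommended parameters" section above.
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    repetition_penalty=1.1,  # the card's "Repeat Penalty"
    max_new_tokens=4096,
)
print(tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```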
bah63843/blockassist-bc-plump_fast_antelope_1757133450
bah63843
2025-09-06T04:38:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:38:12Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Rajaa2112/blockassist-bc-quiet_omnivorous_otter_1757132234
Rajaa2112
2025-09-06T04:36:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "quiet omnivorous otter", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:36:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - quiet omnivorous otter --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vansh-khaneja/Qwen2.5-1.5B-Instruct-FineTuned
vansh-khaneja
2025-09-06T04:34:41Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "sft", "trl", "base_model:Qwen/Qwen2.5-1.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-09-06T04:34:37Z
--- base_model: Qwen/Qwen2.5-1.5B-Instruct library_name: transformers model_name: Qwen2.5-1.5B-Instruct-FineTuned tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for Qwen2.5-1.5B-Instruct-FineTuned This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vansh-khaneja/Qwen2.5-1.5B-Instruct-FineTuned", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.22.2 - Transformers: 4.56.0 - Pytorch: 2.8.0+cu129 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
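For readers who want to set up a comparable run, here is a rough, hypothetical sketch of an SFT job with TRL. The dataset (`trl-lib/Capybara`, a TRL example dataset) and the hyperparameters are placeholders, since the card does not document the author's actual training data or settings:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the actual data used for this checkpoint is unknown.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",  # base model named in the card
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="Qwen2.5-1.5B-Instruct-FineTuned",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        logging_steps=10,
    ),
)
trainer.train()
```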
calegpedia/blockassist-bc-stealthy_slimy_rooster_1757131608
calegpedia
2025-09-06T04:33:28Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:33:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stealthy slimy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
CHRISPI09/blockassist-bc-galloping_thick_tuna_1757133167
CHRISPI09
2025-09-06T04:33:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "galloping thick tuna", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:33:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - galloping thick tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Agentum07/q-taxi-v3
Agentum07
2025-09-06T04:32:43Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-09-06T04:32:39Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.52 +/- 2.77
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
model = load_from_hub(repo_id="Agentum07/q-taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
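The snippet above assumes `load_from_hub` and `gym` are already in scope. Below is a self-contained sketch; the `"qtable"` key follows the Hugging Face Deep RL course convention for these pickle files and should be treated as an assumption about this checkpoint's layout:

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table bundle from the Hub and load it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="Agentum07/q-taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])

# Greedy rollout: always pick the action with the highest Q-value.
# "qtable" is assumed to be the key used by the Deep RL course pickles.
qtable = model["qtable"]
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode reward:", total_reward)
```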
ronitmevadaofficial/kusiAI
ronitmevadaofficial
2025-09-06T04:29:49Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-06T04:29:49Z
--- license: apache-2.0 ---
Dineochiloane/gemma-3-4b-isizulu-inkuba-bidirectional
Dineochiloane
2025-09-06T04:27:58Z
0
0
peft
[ "peft", "safetensors", "machine-translation", "isizulu", "african-languages", "gemma", "lora", "bidirectional", "zu", "en", "dataset:lelapa/Inkuba-instruct", "base_model:google/gemma-3-4b-it", "base_model:adapter:google/gemma-3-4b-it", "license:gemma", "region:us" ]
null
2025-09-06T03:13:19Z
--- language: - zu - en base_model: google/gemma-3-4b-it tags: - machine-translation - isizulu - african-languages - gemma - peft - lora - bidirectional datasets: - lelapa/Inkuba-instruct license: gemma widget: - text: "Translate this from isiZulu to English: Sawubona, unjani?" example_title: "isiZulu to English" - text: "Translate this from English to isiZulu: Hello, how are you?" example_title: "English to isiZulu" --- # Bidirectional isiZulu↔English Translation Model Fine-tuned model for bidirectional translation between isiZulu and English with improved hyperparameters. ## Model Details - **Base Model**: google/gemma-3-4b-it - **Task**: Bidirectional isiZulu ↔ English Translation - **Training Examples**: 50,000 (both directions) - **Prompt Formats**: - "Translate this from isiZulu to English: [text]" - "Translate this from English to isiZulu: [text]" ## Training Configuration ### LoRA Parameters - **Rank**: 16 - **Alpha**: 16 - **Dropout**: 0.15 - **Target Modules**: all-linear ### Training Parameters - **Learning Rate**: 0.0001 - **Epochs**: 2 - **Batch Size**: 8 - **Gradient Accumulation**: 8 - **Effective Batch Size**: 64 ## Usage ```python from transformers import AutoTokenizer, AutoModelForCausalLM from peft import PeftModel # Load model base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-4b-it") tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-it") model = PeftModel.from_pretrained(base_model, "Dineochiloane/gemma-3-4b-isizulu-inkuba-bidirectional") # Translate Zulu to English messages = [{"role": "user", "content": "Translate this from isiZulu to English: Ngiyabonga kakhulu"}] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt") outputs = model.generate(input_ids, max_new_tokens=50, temperature=0.7, repetition_penalty=1.2) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) # Translate English to Zulu messages = [{"role": "user", "content": "Translate this from English to isiZulu: Thank you very much"}] input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt") outputs = model.generate(input_ids, max_new_tokens=50, temperature=0.7, repetition_penalty=1.2) response = tokenizer.decode(outputs[0], skip_special_tokens=True) print(response) ``` ## Dataset Information - **Source**: lelapa/Inkuba-instruct (isiZulu train split) - **Filtering**: MMT task + contains "isingisi" (English) - **Training Strategy**: Bidirectional (both Zulu→English and English→Zulu) - **Original Examples**: 25,000 - **Total Training Examples**: 50,000 (doubled for bidirectionality) ## Improvements in Bidirectional Version - **Bidirectional capability**: Can translate both Zulu→English and English→Zulu - **Improved hyperparameters**: Lower learning rate and higher dropout for better generalization - **Reduced epochs**: Compensates for doubled training data - **Better generation**: Recommended to use temperature=0.7 and repetition_penalty=1.2
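Since this repository is a LoRA adapter, it can optionally be folded into the base weights for lighter-weight inference and easier redistribution. This sketch uses PEFT's generic `merge_and_unload` API and is not specific to this checkpoint; the output directory name is a placeholder:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-3-4b-it", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-4b-it")
model = PeftModel.from_pretrained(base, "Dineochiloane/gemma-3-4b-isizulu-inkuba-bidirectional")

# Fold the LoRA deltas into the base weights so generation runs without
# the PEFT wrapper; the merged model can then be saved like any other.
merged = model.merge_and_unload()
merged.save_pretrained("gemma-3-4b-isizulu-merged")
tokenizer.save_pretrained("gemma-3-4b-isizulu-merged")
```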
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1757132836
fakir22
2025-09-06T04:27:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping peaceful caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:27:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping peaceful caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
david3621/blockassist-bc-gentle_meek_cat_1757131710
david3621
2025-09-06T04:27:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle meek cat", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:24:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle meek cat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hakimjustbao/blockassist-bc-raging_subtle_wasp_1757131175
hakimjustbao
2025-09-06T04:26:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "raging subtle wasp", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:26:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - raging subtle wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
HuggingKola/medvision4_lor
HuggingKola
2025-09-06T04:25:32Z
0
0
transformers
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-06T04:25:25Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HuggingKola/medvision4_lora
HuggingKola
2025-09-06T04:25:24Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/medgemma-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/medgemma-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-06T04:25:11Z
---
base_model: unsloth/medgemma-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** HuggingKola
- **License:** apache-2.0
- **Finetuned from model:** unsloth/medgemma-4b-it-unsloth-bnb-4bit

This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
moonuio/blockassist-bc-untamed_elusive_buffalo_1757132687
moonuio
2025-09-06T04:25:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed elusive buffalo", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:24:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed elusive buffalo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Agentum07/q-FrozenLake-v1-4x4-noSlippery
Agentum07
2025-09-06T04:22:51Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2025-09-06T04:22:47Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="Agentum07/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
bah63843/blockassist-bc-plump_fast_antelope_1757132420
bah63843
2025-09-06T04:21:08Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:21:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
arka7/Qwen2.5_3B-GRPO-medical-reasoning
arka7
2025-09-06T04:20:37Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2025-09-06T04:16:44Z
---
base_model: unsloth/qwen2.5-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** arka7
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-3b-unsloth-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
JunHowie/Qwen3-30B-A3B-GPTQ-Int8
JunHowie
2025-09-06T04:19:37Z
18
0
transformers
[ "transformers", "safetensors", "qwen3_moe", "text-generation", "Qwen3", "GPTQ", "Int8", "量化修复", "vLLM", "conversational", "arxiv:2309.00071", "arxiv:2505.09388", "base_model:Qwen/Qwen3-30B-A3B", "base_model:quantized:Qwen/Qwen3-30B-A3B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2025-05-13T09:51:16Z
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-30B-A3B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- Qwen3
- GPTQ
- Int8
- 量化修复
- vLLM
base_model:
- Qwen/Qwen3-30B-A3B
base_model_relation: quantized
---
# Qwen3-30B-A3B-GPTQ-Int8

Base model: [Qwen/Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B)

<i>This model is quantized to 8-bit with a group size of 128.</i>
<br>
<i>Compared to earlier quantized versions, the new quantized model demonstrates better tokens/s efficiency. This improvement comes from setting desc_act=False in the quantization configuration.</i>

```
vllm serve JunHowie/Qwen3-30B-A3B-GPTQ-Int8
```

### 【Dependencies】

```
vllm>=0.9.2
```

### 【Model Download】

```python
from huggingface_hub import snapshot_download
snapshot_download('JunHowie/Qwen3-30B-A3B-GPTQ-Int8', cache_dir="your_local_path")
```

### 【Overview】

# Qwen3-30B-A3B

<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
    <img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>

## Qwen3 Highlights

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:

- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement in its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.

## Model Overview

**Qwen3-30B-A3B** has the following features:

- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 30.5B in total and 3.3B activated
- Number of Parameters (Non-Embedding): 29.9B
- Number of Layers: 48
- Number of Attention Heads (GQA): 32 for Q and 4 for KV
- Number of Experts: 128
- Number of Activated Experts: 8
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).

## Quickstart

The code for Qwen3-MoE has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3_moe'
```

The following code snippet illustrates how to use the model to generate content based on given inputs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-30B-A3B"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

# parsing thinking content
try:
    # rindex finding 151668 (</think>)
    index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
    index = 0

thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")

print("thinking content:", thinking_content)
print("content:", content)
```

For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
    ```shell
    python -m sglang.launch_server --model-path Qwen/Qwen3-30B-A3B --reasoning-parser qwen3
    ```
- vLLM:
    ```shell
    vllm serve Qwen/Qwen3-30B-A3B --enable-reasoning --reasoning-parser deepseek_r1
    ```

For local use, applications such as Ollama, LM Studio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.

## Switching Between Thinking and Non-Thinking Mode

> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.

### `enable_thinking=True`

By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.

```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True  # True is the default value for enable_thinking
)
```

In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.

> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.

### `enable_thinking=False`

We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models.
This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.

```python
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False  # Setting enable_thinking=False disables thinking mode
)
```

In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.

> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.

### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input

We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.

Here is an example of a multi-turn conversation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

class QwenChatbot:
    def __init__(self, model_name="Qwen/Qwen3-30B-A3B"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.history = []

    def generate_response(self, user_input):
        messages = self.history + [{"role": "user", "content": user_input}]
        text = self.tokenizer.apply_chat_template(
            messages,
            tokenize=False,
            add_generation_prompt=True
        )
        inputs = self.tokenizer(text, return_tensors="pt")
        response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
        response = self.tokenizer.decode(response_ids, skip_special_tokens=True)

        # Update history
        self.history.append({"role": "user", "content": user_input})
        self.history.append({"role": "assistant", "content": response})

        return response

# Example Usage
if __name__ == "__main__":
    chatbot = QwenChatbot()

    # First input (without /think or /no_think tags, thinking mode is enabled by default)
    user_input_1 = "How many r's in strawberries?"
    print(f"User: {user_input_1}")
    response_1 = chatbot.generate_response(user_input_1)
    print(f"Bot: {response_1}")
    print("----------------------")

    # Second input with /no_think
    user_input_2 = "Then, how many r's in blueberries? /no_think"
    print(f"User: {user_input_2}")
    response_2 = chatbot.generate_response(user_input_2)
    print(f"Bot: {response_2}")
    print("----------------------")

    # Third input with /think
    user_input_3 = "Really? /think"
    print(f"User: {user_input_3}")
    response_3 = chatbot.generate_response(user_input_3)
    print(f"Bot: {response_3}")
```

> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches have no effect. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.

## Agentic Use

Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic capabilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools yourself.

```python
from qwen_agent.agents import Assistant

# Define LLM
llm_cfg = {
    'model': 'Qwen3-30B-A3B',

    # Use the endpoint provided by Alibaba Model Studio:
    # 'model_type': 'qwen_dashscope',
    # 'api_key': os.getenv('DASHSCOPE_API_KEY'),

    # Use a custom endpoint compatible with OpenAI API:
    'model_server': 'http://localhost:8000/v1',  # api_base
    'api_key': 'EMPTY',

    # Other parameters:
    # 'generate_cfg': {
    #         # Add: When the response content is `<think>this is the thought</think>this is the answer;
    #         # Do not add: When the response has been separated by reasoning_content and content.
    #         'thought_in_content': True,
    #     },
}

# Define Tools
tools = [
    {'mcpServers': {  # You can specify the MCP configuration file
            'time': {
                'command': 'uvx',
                'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
            },
            "fetch": {
                "command": "uvx",
                "args": ["mcp-server-fetch"]
            }
        }
    },
    'code_interpreter',  # Built-in tools
]

# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)

# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
    pass
print(responses)
```

## Processing Long Texts

Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.

YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:

- Modifying the model files: In the `config.json` file, add the `rope_scaling` fields:

    ```json
    {
        ...,
        "rope_scaling": {
            "rope_type": "yarn",
            "factor": 4.0,
            "original_max_position_embeddings": 32768
        }
    }
    ```

    For `llama.cpp`, you need to regenerate the GGUF file after the modification.

- Passing command line arguments:

    For `vllm`, you can use

    ```shell
    vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
    ```

    For `sglang`, you can use

    ```shell
    python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
    ```

    For `llama-server` from `llama.cpp`, you can use

    ```shell
    llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
    ```

> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.

> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.

> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.

## Best Practices

To achieve optimal performance, we recommend the following settings:

1. **Sampling Parameters**:
   - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
   - For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
   - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.

2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.

3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
   - **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
   - **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."

4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should include only the final output and does not need to include the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.

### Citation

If you find our work helpful, feel free to cite us.

```
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
}
```
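Once one of the vLLM or SGLang endpoints above is running, it can be queried with any OpenAI-compatible client. Here is a minimal sketch, assuming the local `vllm serve JunHowie/Qwen3-30B-A3B-GPTQ-Int8` example from this card and the thinking-mode sampling settings from Best Practices:

```python
from openai import OpenAI

# Assumes a local vLLM server started as shown earlier in this card.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="JunHowie/Qwen3-30B-A3B-GPTQ-Int8",
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}],
    temperature=0.6,                # thinking-mode settings from Best Practices
    top_p=0.95,
    max_tokens=32768,
    extra_body={"top_k": 20},       # vLLM-specific passthrough parameter
)
print(response.choices[0].message.content)
```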
shahidul034/Translation_Evaluator_Qwen3_14B_v1
shahidul034
2025-09-06T04:19:29Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-06T04:19:17Z
---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** shahidul034
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
aliarda/llama-50M-latest
aliarda
2025-09-06T04:18:40Z
0
1
null
[ "tr", "dataset:aliarda/turkish-news-1.8M-tokenized", "base_model:aliarda/llama-50M-randParams", "base_model:finetune:aliarda/llama-50M-randParams", "license:apache-2.0", "region:us" ]
null
2025-09-03T18:56:00Z
---
license: apache-2.0
datasets:
- aliarda/turkish-news-1.8M-tokenized
language:
- tr
base_model:
- aliarda/llama-50M-randParams
---

This is a Llama model with ~50M parameters. You can use the modeling files from [this GitHub repo](https://github.com/ardafincan/LM-playground).

- Model Size: 52,177,152 parameters
- Vocab Size: 32,768
- Context Length: 512
- Embedding Dimension: 256
- Attention Heads: 128
- KV Groups: 64
- Hidden Dimension: 2048
- Number of Layers: 20
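As a quick sanity check of the listed dimensions, one can rebuild an equivalent configuration with the stock `transformers` `LlamaConfig` and count parameters. This is only an approximation, since the repo ships its own modeling files, but under transformers' Llama defaults (untied embeddings, no attention/MLP biases) the count reproduces the listed 52,177,152:

```python
from transformers import LlamaConfig, LlamaForCausalLM

# Approximate reconstruction of the card's hyperparameters; the official
# modeling files live in the linked GitHub repo, so details may differ.
config = LlamaConfig(
    vocab_size=32_768,
    hidden_size=256,              # embedding dimension
    intermediate_size=2_048,      # hidden (MLP) dimension
    num_hidden_layers=20,
    num_attention_heads=128,      # head_dim works out to 256 / 128 = 2
    num_key_value_heads=64,       # KV groups
    max_position_embeddings=512,  # context length
)
model = LlamaForCausalLM(config)
print(sum(p.numel() for p in model.parameters()))  # 52,177,152 under these defaults
```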
Watch-videos-Laken-Snelling-Case/Original.full.videos.Laken.Snelling.Case.viral.video.Official.Tutorial
Watch-videos-Laken-Snelling-Case
2025-09-06T04:17:39Z
0
0
null
[ "region:us" ]
null
2025-09-06T04:17:23Z
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
fakir22/blockassist-bc-flapping_peaceful_caterpillar_1757132181
fakir22
2025-09-06T04:17:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping peaceful caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:16:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping peaceful caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
youuotty/blockassist-bc-alert_hardy_toad_1757132131
youuotty
2025-09-06T04:15:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "alert hardy toad", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:15:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - alert hardy toad --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
SubhrajitSain/anwgpt2-345m
SubhrajitSain
2025-09-06T04:14:18Z
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:SubhrajitSain/anwgpt2-345m", "base_model:finetune:SubhrajitSain/anwgpt2-345m", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-05T17:17:51Z
--- library_name: transformers license: mit base_model: SubhrajitSain/anwgpt2-345m tags: - generated_from_trainer model-index: - name: anwgpt2-345m results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # anwgpt2-345m This model is a fine-tuned version of [SubhrajitSain/anwgpt2-345m](https://huggingface.co/SubhrajitSain/anwgpt2-345m) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 15 ### Framework versions - Transformers 4.56.0 - Pytorch 2.8.0+cu126 - Datasets 2.14.6 - Tokenizers 0.22.0
befox/WAN2.2-14B-Rapid-AllInOne-GGUF
befox
2025-09-06T04:12:49Z
373
7
null
[ "gguf", "base_model:Phr00t/WAN2.2-14B-Rapid-AllInOne", "base_model:quantized:Phr00t/WAN2.2-14B-Rapid-AllInOne", "region:us" ]
null
2025-09-03T05:08:41Z
--- base_model: - Phr00t/WAN2.2-14B-Rapid-AllInOne --- GGUF version of [Phr00t/WAN2.2-14B-Rapid-AllInOne](https://huggingface.co/Phr00t/WAN2.2-14B-Rapid-AllInOne)
Miracle-man/blockassist-bc-singing_lithe_koala_1757130106
Miracle-man
2025-09-06T04:11:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "singing lithe koala", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:11:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - singing lithe koala --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1757131689
bah63843
2025-09-06T04:09:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:08:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
weruior/blockassist-bc-jagged_pudgy_porcupine_1757131716
weruior
2025-09-06T04:08:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "jagged pudgy porcupine", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:08:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - jagged pudgy porcupine --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hobson123/blockassist-bc-mammalian_dense_gibbon_1757131337
hobson123
2025-09-06T04:03:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mammalian dense gibbon", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:03:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mammalian dense gibbon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1757131333
bah63843
2025-09-06T04:03:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:02:53Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kaidhar/my-embedding-gemma
kaidhar
2025-09-06T04:02:53Z
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "gemma3_text", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:3", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:google/embeddinggemma-300m", "base_model:finetune:google/embeddinggemma-300m", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2025-09-06T04:00:55Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - dense - generated_from_trainer - dataset_size:3 - loss:MultipleNegativesRankingLoss base_model: google/embeddinggemma-300m pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on google/embeddinggemma-300m This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) <!-- at revision 64614b0b8b64f0c6c1e52b07e4e9a4e8fe4d2da2 --> - **Maximum Sequence Length:** 2048 tokens - **Output Dimensionality:** 768 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'Gemma3TextModel'}) (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Dense({'in_features': 768, 'out_features': 3072, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'}) (3): Dense({'in_features': 3072, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'}) (4): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("kaidhar/my-embedding-gemma") # Run inference queries = [ "Which planet is known as the Red Planet?", ] documents = [ "Venus is often called Earth's twin because of its similar size and proximity.", 'Mars, known for its reddish appearance, is often referred to as the Red Planet.', 'Saturn, famous for its rings, is sometimes mistaken for the Red Planet.', ] query_embeddings = model.encode_query(queries) document_embeddings = model.encode_document(documents) print(query_embeddings.shape, document_embeddings.shape) # [1, 768] [3, 768] # Get the similarity scores for the embeddings similarities = model.similarity(query_embeddings, document_embeddings) print(similarities) # tensor([[0.2880, 0.6381, 0.4942]]) ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. 
<details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 3 training samples * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code> * Approximate statistics based on the first 3 samples: | | anchor | positive | negative | |:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------| | type | string | string | string | | details | <ul><li>min: 10 tokens</li><li>mean: 12.0 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 15.33 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 12.67 tokens</li><li>max: 14 tokens</li></ul> | * Samples: | anchor | positive | negative | |:--------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------| | <code>How do I open a NISA account?</code> | <code>What is the procedure for starting a new tax-free investment account?</code> | <code>I want to check the balance of my regular savings account.</code> | | <code>Are there fees for making an early repayment on a home loan?</code> | <code>If I pay back my house loan early, will there be any costs?</code> | <code>What is the management fee for this investment trust?</code> | | <code>What is the coverage for medical insurance?</code> | <code>Tell me about the benefits of the health insurance plan.</code> | <code>What is the cancellation policy for my life insurance?</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim", "gather_across_devices": false } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 1 - `learning_rate`: 2e-05 - `num_train_epochs`: 5 - `warmup_ratio`: 0.1 - `prompts`: task: sentence similarity | query: #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 1 - `per_device_eval_batch_size`: 8 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 2e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1.0 - `num_train_epochs`: 5 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.1 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - 
`save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `parallelism_config`: None - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch_fused - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `hub_revision`: None - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `liger_kernel_config`: None - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: task: sentence similarity | query: - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: proportional - `router_mapping`: {} - `learning_rate_mapping`: {} </details> ### Training Logs | Epoch | Step | Training Loss | |:-----:|:----:|:-------------:| | 1.0 | 3 | 0.0483 | | 2.0 | 6 | 0.0 | | 3.0 | 9 | 0.0 | | 4.0 | 12 | 0.0 | | 5.0 | 15 | 0.0 | ### Framework Versions - Python: 3.12.11 - Sentence Transformers: 5.1.0 - Transformers: 4.57.0.dev0 - PyTorch: 2.8.0+cu126 - Accelerate: 1.10.1 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association 
for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
jaideep3242/automodel
jaideep3242
2025-09-06T04:01:20Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-09-06T04:01:20Z
--- license: apache-2.0 ---
pouiiq/blockassist-bc-pensive_twitchy_ape_1757131234
pouiiq
2025-09-06T04:00:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pensive twitchy ape", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T04:00:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pensive twitchy ape --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Ahmed-88889/Qwen2-VL-7B-Instruct_3_epoch
Ahmed-88889
2025-09-06T03:57:46Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-09-05T21:24:28Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here. -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly. -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
GrizzlyEgor/blockassist-bc-thick_silent_crow_1757128699
GrizzlyEgor
2025-09-06T03:54:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thick silent crow", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T03:54:19Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thick silent crow
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1757130729
bah63843
2025-09-06T03:53:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T03:52:52Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- plump fast antelope
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
arthinfinity/Qwen3-0.6B-Gensyn-Swarm-diving_rabid_giraffe
arthinfinity
2025-09-06T03:52:21Z
70
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am diving_rabid_giraffe", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-31T02:17:47Z
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am diving_rabid_giraffe
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for use of the model without fine-tuning or plugging it into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for use of the model when fine-tuned for a task, or when plugged into a larger ecosystem/app. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
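A minimal text-generation sketch, assuming standard causal-LM usage for this checkpoint and a recent `transformers` release with Qwen3 support; this is not an author-verified recipe.

```python
# Minimal sketch, assuming standard causal-LM generation with a recent
# transformers release that includes Qwen3 support.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "arthinfinity/Qwen3-0.6B-Gensyn-Swarm-diving_rabid_giraffe"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto")

prompt = "Briefly explain what a reward model is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```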
## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here. -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly. -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
johngreendr1/070ddd17-6c3a-45da-9496-6ab1896d7abe
johngreendr1
2025-09-06T03:51:39Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:migtissera/Tess-v2.5-Phi-3-medium-128k-14B", "base_model:adapter:migtissera/Tess-v2.5-Phi-3-medium-128k-14B", "region:us" ]
null
2025-09-06T01:54:30Z
---
base_model: migtissera/Tess-v2.5-Phi-3-medium-128k-14B
library_name: peft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for use of the model without fine-tuning or plugging it into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for use of the model when fine-tuned for a task, or when plugged into a larger ecosystem/app. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
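A minimal sketch for attaching this adapter to its declared base model, assuming a causal-LM PEFT adapter (as the `base_model:adapter` tag suggests); dtype/device handling is simplified and this is not an author-verified recipe.

```python
# Minimal sketch: attach this PEFT adapter to the base model declared in the
# front matter. Assumes a causal-LM adapter; device placement is omitted.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "migtissera/Tess-v2.5-Phi-3-medium-128k-14B"
adapter_id = "johngreendr1/070ddd17-6c3a-45da-9496-6ab1896d7abe"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # loads adapter weights

inputs = tokenizer("Hello,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```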
## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here. -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly. -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and BibTeX information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.15.1
dsaddsdsdd/blockassist-bc-stinging_darting_anteater_1757129241
dsaddsdsdd
2025-09-06T03:49:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stinging darting anteater", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T03:49:06Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging darting anteater
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
raihannabiil/blockassist-bc-humming_rugged_viper_1757128054
raihannabiil
2025-09-06T03:49:44Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "humming rugged viper", "arxiv:2504.07091", "region:us" ]
null
2025-09-06T03:49:38Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- humming rugged viper
---

# Gensyn BlockAssist

Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).