| Column | Dtype | Min | Max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-06 12:28:13 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (543 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-06 12:27:52 |
| card | string (length) | 11 | 1.01M |
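The records below follow this schema, one field per line per model. As a minimal sketch, such a dump can be loaded with pandas, assuming it has been exported to a hypothetical `models.parquet` file with these columns:

```python
import pandas as pd

# Hypothetical export of the records below; the file name is an assumption.
df = pd.read_parquet("models.parquet")

# Quick sanity checks against the schema above.
assert df["downloads"].min() >= 0
print(df.sort_values("likes", ascending=False)[["modelId", "likes"]].head())
```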
grizzle00/blockassist-bc-curious_mimic_antelope_1757083837
grizzle00
2025-09-05T15:27:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "curious mimic antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:27:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - curious mimic antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
NahedDom/blockassist-bc-flapping_stocky_leopard_1757083820
NahedDom
2025-09-05T15:26:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "flapping stocky leopard", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:26:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - flapping stocky leopard --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Mimic-1.0-GGUF
mradermacher
2025-09-05T15:25:43Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "en", "base_model:AppliedLucent/Mimic-1.0", "base_model:quantized:AppliedLucent/Mimic-1.0", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-05T13:37:38Z
--- base_model: AppliedLucent/Mimic-1.0 language: - en library_name: transformers license: apache-2.0 mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/AppliedLucent/Mimic-1.0 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Mimic-1.0-GGUF).*** weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mimic-1.0-GGUF/resolve/main/Mimic-1.0.Q2_K.gguf) | Q2_K | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Mimic-1.0-GGUF/resolve/main/Mimic-1.0.Q3_K_S.gguf) | Q3_K_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/Mimic-1.0-GGUF/resolve/main/Mimic-1.0.Q3_K_M.gguf) | Q3_K_M | 7.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mimic-1.0-GGUF/resolve/main/Mimic-1.0.Q3_K_L.gguf) | Q3_K_L | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/Mimic-1.0-GGUF/resolve/main/Mimic-1.0.IQ4_XS.gguf) | IQ4_XS | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/Mimic-1.0-GGUF/resolve/main/Mimic-1.0.Q4_K_S.gguf) | Q4_K_S | 8.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mimic-1.0-GGUF/resolve/main/Mimic-1.0.Q4_K_M.gguf) | Q4_K_M | 9.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mimic-1.0-GGUF/resolve/main/Mimic-1.0.Q5_K_S.gguf) | Q5_K_S | 10.3 | | | [GGUF](https://huggingface.co/mradermacher/Mimic-1.0-GGUF/resolve/main/Mimic-1.0.Q5_K_M.gguf) | Q5_K_M | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/Mimic-1.0-GGUF/resolve/main/Mimic-1.0.Q6_K.gguf) | Q6_K | 12.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mimic-1.0-GGUF/resolve/main/Mimic-1.0.Q8_0.gguf) | Q8_0 | 15.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
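The quant table above maps each type to a concrete file in the repo. As a minimal sketch, the recommended Q4_K_M file it lists can be fetched with `huggingface_hub` (the repo id and filename come from the table; the returned local path can then be handed to a llama.cpp build):

```python
from huggingface_hub import hf_hub_download

# Download the Q4_K_M quant listed in the table above into the local HF cache.
path = hf_hub_download(
    repo_id="mradermacher/Mimic-1.0-GGUF",
    filename="Mimic-1.0.Q4_K_M.gguf",
)
print(path)  # local file path, ready to load with llama.cpp
```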
xcvbghjh/blockassist-bc-muscular_carnivorous_okapi_1757085808
xcvbghjh
2025-09-05T15:24:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular carnivorous okapi", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:23:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - muscular carnivorous okapi --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
2hpsatt/blockassist-bc-huge_deft_eagle_1757085745
2hpsatt
2025-09-05T15:23:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "huge deft eagle", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:23:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - huge deft eagle --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/gil-elvgren
Muapi
2025-09-05T15:22:39Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-09-05T15:22:26Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Gil Elvgren ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: GilElren style ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:1797087@2033754", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
xcvbghjh/blockassist-bc-pensive_twitchy_ape_1757085730
xcvbghjh
2025-09-05T15:22:34Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pensive twitchy ape", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:22:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pensive twitchy ape --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/val-red-flags
Muapi
2025-09-05T15:19:10Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-09-05T15:18:31Z
--- license: openrail++ tags: - lora - stable-diffusion - flux.1-d model_type: LoRA --- # Val - Red Flags ![preview](./preview.jpg) **Base model**: Flux.1 D **Trained words**: tc_val, crimson hair, lip piercing, earlobe piercing ## 🧠 Usage (Python) 🔑 **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys) ```python import requests, os url = "https://api.muapi.ai/api/v1/flux_dev_lora_image" headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")} payload = { "prompt": "masterpiece, best quality, 1girl, looking at viewer", "model_id": [{"model": "civitai:818806@915656", "weight": 1.0}], "width": 1024, "height": 1024, "num_images": 1 } print(requests.post(url, headers=headers, json=payload).json()) ```
bah63843/blockassist-bc-plump_fast_antelope_1757085497
bah63843
2025-09-05T15:19:09Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:19:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ibm-nasa-geospatial/Prithvi-EO-1.0-100M-burn-scar
ibm-nasa-geospatial
2025-09-05T15:19:08Z
7
29
terratorch
[ "terratorch", "Pytorch", "mmsegmentation", "segmentation", "burn scars", "Geospatial", "Foundation model", "image-segmentation", "en", "dataset:ibm-nasa-geospatial/hls_burn_scars", "license:apache-2.0", "region:us" ]
image-segmentation
2023-07-05T21:51:34Z
--- license: apache-2.0 language: - en tags: - Pytorch - mmsegmentation - segmentation - burn scars - Geospatial - Foundation model datasets: - ibm-nasa-geospatial/hls_burn_scars metrics: - accuracy - IoU - F1 Score library_name: terratorch pipeline_tag: image-segmentation --- ### Model and Inputs The pretrained [Prithvi-EO-1.0-100M](https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M/blob/main/README.md) model is fine-tuned to detect burn scars on HLS data from the [HLS Burn Scar Scenes dataset](https://huggingface.co/datasets/ibm-nasa-geospatial/hls_burn_scars). This dataset includes input tiles of 512x512x6, where 512 is the height and width and 6 is the number of bands. The bands are: 1. Blue 2. Green 3. Red 4. Narrow NIR 5. SWIR 1 6. SWIR 2 ![](burn_scar.png) It is important to point out that the HLS Burn Scar Scenes dataset includes a single timestep, while Prithvi-100M was pretrained with three timesteps. The difference highlights the flexibility of this model to adapt to different downstream tasks and requirements. ### Code Code for fine-tuning is available on [Github](https://github.com/NASA-IMPACT/hls-foundation-os/tree/main/configs), and the configuration used for fine-tuning is available through [config](https://github.com/NASA-IMPACT/hls-foundation-os/blob/main/configs/burn_scars.py). ### Results The experiment conducted by running the mmseg stack for 50 epochs using the above config led to an IoU of **0.73** on the burn scar class and an overall accuracy of **0.96**. This is a reasonably good model, but further development will most likely improve performance. ### Inference and demo The github repo includes an inference script that allows you to run the burn scar model for inference on HLS images. These inputs have to be in GeoTIFF format, including the channels described above (Blue, Green, Red, Narrow NIR, SWIR 1, SWIR 2) in reflectance units [0-1]. There is also a **demo** that leverages the same code **[here](https://huggingface.co/spaces/ibm-nasa-geospatial/Prithvi-100M-Burn-scars-demo)**. ### Feedback Your feedback is invaluable to us. If you have any feedback about the model, please feel free to share it with us. You can do this by submitting issues on our open-source repository, [hls-foundation-os](https://github.com/NASA-IMPACT/hls-foundation-os/issues), on GitHub. ### Citation If this model helped your research, please cite `Prithvi-100M-burn-scar` in your publications. Here is an example BibTeX entry: ``` @misc{Prithvi-100M-burn-scar, author = {Roy, Sujit and Phillips, Christopher and Jakubik, Johannes and Fraccaro, Paolo and Ankur, Kumar and Avery, Ryan and Ji, Wei and Zadrozny, Bianca and Ramachandran, Rahul}, doi = {10.57967/hf/0953}, month = aug, title = {{Prithvi 100M burn scar}}, url = {https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M-burn-scar}, year = {2023} } ```
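The card pins down the expected input layout; here is a hypothetical NumPy sketch of assembling one such tile, with random data standing in for real HLS reflectances (the actual inference script linked above handles GeoTIFF I/O itself):

```python
import numpy as np

# Hypothetical tile in the band order the card specifies:
# Blue, Green, Red, Narrow NIR, SWIR 1, SWIR 2, reflectances in [0, 1].
bands = [np.random.rand(512, 512).astype(np.float32) for _ in range(6)]
tile = np.stack(bands, axis=-1)  # shape (512, 512, 6): height, width, bands
assert tile.shape == (512, 512, 6)
assert 0.0 <= tile.min() and tile.max() <= 1.0
```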
ibm-nasa-geospatial/Prithvi-WxC-1.0-2300m-gravity-wave-parameterization
ibm-nasa-geospatial
2025-09-05T15:18:26Z
13
10
terratorch
[ "terratorch", "Pytorch", "gravity wave", "Weather & Climate", "Foundation model", "en", "dataset:Prithvi-WxC/Gravity_wave_Parameterization", "arxiv:2409.13598", "arxiv:2406.14775", "base_model:ibm-nasa-geospatial/Prithvi-WxC-1.0-2300M", "base_model:finetune:ibm-nasa-geospatial/Prithvi-WxC-1.0-2300M", "license:apache-2.0", "region:us" ]
null
2024-09-19T21:50:01Z
--- license: apache-2.0 language: - en tags: - Pytorch - gravity wave - Weather & Climate - Foundation model datasets: - Prithvi-WxC/Gravity_wave_Parameterization base_model: - Prithvi-WxC/prithvi.wxc.2300m.v1 library_name: terratorch --- This repository contains the pretrained model for the gravity wave flux parameterization downstream task. <img src="https://cdn-uploads.huggingface.co/production/uploads/6488f1d3e22a0081a561ec8f/lOFP_1dAVKCw90uLpj2vu.png" alt="Gravity Wave" width="1024"/> ### Model The pretrained [Prithvi WxC](https://huggingface.co/ibm-nasa-geospatial/Prithvi-WxC-1.0-2300M) model is fine-tuned to predict momentum fluxes from the [Gravity Wave Parameterization dataset](https://huggingface.co/datasets/ibm-nasa-geospatial/gravity-wave-parameterization). <b>Input:</b> 491 (3 + 4x122) channels. 1. latitude (1) 2. longitude (1) 3. surface elevation (1) 4. zonal winds \\(u\\) (122) 5. meridional winds \\(v\\) (122) 6. temperature \\(T\\) (122) 7. pressure \\(P\\) (122) <b>Output:</b> 366 (3x122) channels. 1. potential temperature \\(\theta\\) (122) 2. zonal flux of vertical momentum \\(u'\omega'\\) (122) 3. meridional flux of vertical momentum \\(v'\omega'\\) (122) ### Code Code for fine-tuning is available through [Github](https://github.com/NASA-IMPACT/gravity-wave-finetuning). ### Results <img src="https://cdn-uploads.huggingface.co/production/uploads/6488f1d3e22a0081a561ec8f/Vk1EKgzf_j90ZPiw2hGHE.png" alt="Gravity Wave" width="1024"/> For the Andes (mountain waves) and the Southern Ocean (non-mountain waves), the fine-tuned model achieves correlation coefficients of 0.99 and 0.97, respectively, when compared to the observed fluxes. ### Inference and demo The github repo includes an inference script that allows you to run the [gravity_wave_model](https://huggingface.co/ibm-nasa-geospatial/Prithvi-WxC-1.0-2300m-gravity-wave-parameterization/blob/main/magnet-flux-uvtp122-epoch-99-loss-0.1022.pt) model for inference on a [sample dataset](https://huggingface.co/datasets/ibm-nasa-geospatial/gravity-wave-parameterization/blob/main/wxc_input_u_v_t_p_output_theta_uw_vw_era5_training_data_hourly_2015_constant_mu_sigma_scaling05.nc). ## Citation If you use this work, consider citing our paper ``` @misc{schmude2024prithviwxcfoundationmodel, title={Prithvi WxC: Foundation Model for Weather and Climate}, author={Johannes Schmude and Sujit Roy and Will Trojak and Johannes Jakubik and Daniel Salles Civitarese and Shraddha Singh and Julian Kuehnert and Kumar Ankur and Aman Gupta and Christopher E Phillips and Romeo Kienzler and Daniela Szwarcman and Vishal Gaur and Rajat Shinde and Rohit Lal and Arlindo Da Silva and Jorge Luis Guevara Diaz and Anne Jones and Simon Pfreundschuh and Amy Lin and Aditi Sheshadri and Udaysankar Nair and Valentine Anantharaj and Hendrik Hamann and Campbell Watson and Manil Maskey and Tsengdar J Lee and Juan Bernabe Moreno and Rahul Ramachandran}, year={2024}, eprint={2409.13598}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2409.13598}, } ``` ``` @article{gupta2024machine, title={Machine learning global simulation of nonlocal gravity wave propagation}, author={Gupta, Aman and Sheshadri, Aditi and Roy, Sujit and Gaur, Vishal and Maskey, Manil and Ramachandran, Rahul}, journal={arXiv preprint arXiv:2406.14775}, year={2024} } ```
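The channel bookkeeping above is easy to get wrong when wiring up a data loader; a small self-check of the stated layout (the variable names are illustrative, not the repo's API):

```python
# Verify the channel counts stated in the card: 491 inputs, 366 outputs.
LEVELS = 122  # vertical levels per atmospheric variable
inputs = {"latitude": 1, "longitude": 1, "surface_elevation": 1,
          "u": LEVELS, "v": LEVELS, "T": LEVELS, "P": LEVELS}
outputs = {"theta": LEVELS, "u_omega": LEVELS, "v_omega": LEVELS}
assert sum(inputs.values()) == 3 + 4 * LEVELS == 491
assert sum(outputs.values()) == 3 * LEVELS == 366
```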
hokpertoy/blockassist-bc-iridescent_aquatic_parrot_1757085452
hokpertoy
2025-09-05T15:17:55Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "iridescent aquatic parrot", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:17:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - iridescent aquatic parrot --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ibm-nasa-geospatial/Prithvi-WxC-1.0-2300M-rollout
ibm-nasa-geospatial
2025-09-05T15:17:33Z
211
20
terratorch
[ "terratorch", "arxiv:2409.13598", "license:apache-2.0", "region:us" ]
null
2024-09-20T18:34:24Z
--- license: apache-2.0 library_name: terratorch --- Prithvi WxC is a 2.3 billion parameter model trained on 160 different variables from MERRA-2 data. It has been pretrained on both forecasting and masked reconstruction objectives. That is, the model is capable of reconstructing atmospheric state from partial information as well as propagating state into the future. The model takes data from two timestamps as input and generates a single, possibly future, timestamp as output. Currently Prithvi WxC comes in two flavors: - `prithvi.wxc.2300m.v1` has been pretrained with a 50% masking ratio. The time delta between input timestamps is variable, as is the forecast lead time. During pretraining, the input delta was chosen from [-3, -6, -9, -12] hours while the forecast lead time was chosen from [0, 6, 12, 24] hours. We recommend using `prithvi.wxc.2300m.v1` for generic use cases that do not focus on forecasting. - (This model) `prithvi.wxc.rollout.2300m.v1` has been through further training cycles to be optimized for autoregressive rollout. Here, we restricted the input delta as well as the lead time to 6 hours. We recommend using `prithvi.wxc.rollout.2300m.v1` for forecasting applications. <div style="display: flex; justify-content: center;"> <b>Zero-Shot Rollout</b> <img src="https://huggingface.co/Prithvi-WxC/prithvi.wxc.rollout.2300m.v1/resolve/bffd73a5b80904a4a1c8637a9f3bb35d32ce3382/2021C4Ida_2021082700_2plots_winds_platecaree_nohurr.gif" alt="Rollout" width="1024"/> <br> <img src="https://huggingface.co/Prithvi-WxC/prithvi.wxc.rollout.2300m.v1/resolve/bffd73a5b80904a4a1c8637a9f3bb35d32ce3382/2021C4Ida_2021082700_2plots_winds_platecaree_CONUS.gif" alt="Rollout_hurr" width="1024"/> </div> ## Citation If you use this work, consider citing our paper ``` @misc{schmude2024prithviwxcfoundationmodel, title={Prithvi WxC: Foundation Model for Weather and Climate}, author={Johannes Schmude and Sujit Roy and Will Trojak and Johannes Jakubik and Daniel Salles Civitarese and Shraddha Singh and Julian Kuehnert and Kumar Ankur and Aman Gupta and Christopher E Phillips and Romeo Kienzler and Daniela Szwarcman and Vishal Gaur and Rajat Shinde and Rohit Lal and Arlindo Da Silva and Jorge Luis Guevara Diaz and Anne Jones and Simon Pfreundschuh and Amy Lin and Aditi Sheshadri and Udaysankar Nair and Valentine Anantharaj and Hendrik Hamann and Campbell Watson and Manil Maskey and Tsengdar J Lee and Juan Bernabe Moreno and Rahul Ramachandran}, year={2024}, eprint={2409.13598}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2409.13598}, } ```
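Since this checkpoint fixes both the input delta and the lead time at 6 hours, autoregressive forecasting amounts to repeatedly feeding the newest prediction back in. A minimal sketch, assuming a hypothetical `model(prev, curr)` callable that returns the atmospheric state 6 hours after `curr`:

```python
# Hypothetical 6-hourly autoregressive rollout: `model` is an assumed
# callable mapping two input timestamps to one output timestamp (+6 h).
def rollout(model, state_minus_6h, state_now, steps):
    prev, curr = state_minus_6h, state_now
    forecasts = []
    for _ in range(steps):
        nxt = model(prev, curr)   # forecast 6 h ahead of `curr`
        forecasts.append(nxt)
        prev, curr = curr, nxt    # feed the forecast back in
    return forecasts
```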
mradermacher/Austral-Xgen-9B-Base-GGUF
mradermacher
2025-09-05T15:16:52Z
0
0
transformers
[ "transformers", "gguf", "en", "base_model:Delta-Vector/Austral-Xgen-9B-Base", "base_model:quantized:Delta-Vector/Austral-Xgen-9B-Base", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-05T14:10:26Z
--- base_model: Delta-Vector/Austral-Xgen-9B-Base language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/Delta-Vector/Austral-Xgen-9B-Base <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Austral-Xgen-9B-Base-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Austral-Xgen-9B-Base-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Base-GGUF/resolve/main/Austral-Xgen-9B-Base.Q2_K.gguf) | Q2_K | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Base-GGUF/resolve/main/Austral-Xgen-9B-Base.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Base-GGUF/resolve/main/Austral-Xgen-9B-Base.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Base-GGUF/resolve/main/Austral-Xgen-9B-Base.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Base-GGUF/resolve/main/Austral-Xgen-9B-Base.IQ4_XS.gguf) | IQ4_XS | 6.0 | | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Base-GGUF/resolve/main/Austral-Xgen-9B-Base.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Base-GGUF/resolve/main/Austral-Xgen-9B-Base.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Base-GGUF/resolve/main/Austral-Xgen-9B-Base.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Base-GGUF/resolve/main/Austral-Xgen-9B-Base.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Base-GGUF/resolve/main/Austral-Xgen-9B-Base.Q6_K.gguf) | Q6_K | 8.8 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Base-GGUF/resolve/main/Austral-Xgen-9B-Base.Q8_0.gguf) | Q8_0 | 11.4 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Austral-Xgen-9B-Base-GGUF/resolve/main/Austral-Xgen-9B-Base.f16.gguf) | f16 | 21.4 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
qgallouedec/Qwen3-14B-SFT-20250905151108
qgallouedec
2025-09-05T15:14:26Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "hf_jobs", "trl", "sft", "dataset:trl-lib/Capybara", "base_model:Qwen/Qwen3-8B", "base_model:finetune:Qwen/Qwen3-8B", "endpoints_compatible", "region:us" ]
null
2025-09-05T15:12:07Z
--- base_model: Qwen/Qwen3-8B datasets: trl-lib/Capybara library_name: transformers model_name: Qwen3-14B-SFT-20250905151108 tags: - generated_from_trainer - hf_jobs - trl - sft licence: license --- # Model Card for Qwen3-14B-SFT-20250905151108 This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="qgallouedec/Qwen3-14B-SFT-20250905151108", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0.dev0 - Transformers: 4.56.1 - Pytorch: 2.8.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
cwayneconnor/blockassist-bc-mute_loud_lynx_1757085103
cwayneconnor
2025-09-05T15:13:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute loud lynx", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:12:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute loud lynx --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kittygirlhere/blockassist-bc-twitchy_beaked_coral_1757085118
kittygirlhere
2025-09-05T15:12:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "twitchy beaked coral", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:12:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - twitchy beaked coral --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
sirev/gemma3-4b-exp-Q4_K_M-GGUF
sirev
2025-09-05T15:12:28Z
0
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:sirev/gemma3-4b-exp", "base_model:quantized:sirev/gemma3-4b-exp", "endpoints_compatible", "region:us" ]
null
2025-09-05T15:12:10Z
--- base_model: sirev/gemma3-4b-exp tags: - llama-cpp - gguf-my-repo --- # sirev/gemma3-4b-exp-Q4_K_M-GGUF This model was converted to GGUF format from [`sirev/gemma3-4b-exp`](https://huggingface.co/sirev/gemma3-4b-exp) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/sirev/gemma3-4b-exp) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo sirev/gemma3-4b-exp-Q4_K_M-GGUF --hf-file gemma3-4b-exp-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo sirev/gemma3-4b-exp-Q4_K_M-GGUF --hf-file gemma3-4b-exp-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo sirev/gemma3-4b-exp-Q4_K_M-GGUF --hf-file gemma3-4b-exp-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo sirev/gemma3-4b-exp-Q4_K_M-GGUF --hf-file gemma3-4b-exp-q4_k_m.gguf -c 2048 ```
hokpertoy/blockassist-bc-silky_diving_viper_1757084991
hokpertoy
2025-09-05T15:10:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "silky diving viper", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:09:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - silky diving viper --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
helmutsukocok/blockassist-bc-loud_scavenging_kangaroo_1757083145
helmutsukocok
2025-09-05T15:05:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "loud scavenging kangaroo", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:05:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - loud scavenging kangaroo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
momomuio/blockassist-bc-bipedal_lanky_meerkat_1757084671
momomuio
2025-09-05T15:04:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bipedal lanky meerkat", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:04:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bipedal lanky meerkat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
qgallouedec/Qwen3-14B-SFT-20250905150035
qgallouedec
2025-09-05T15:03:12Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "hf_jobs", "trl", "sft", "dataset:trl-lib/Capybara", "base_model:Qwen/Qwen3-8B", "base_model:finetune:Qwen/Qwen3-8B", "endpoints_compatible", "region:us" ]
null
2025-09-05T15:01:38Z
--- base_model: Qwen/Qwen3-8B datasets: trl-lib/Capybara library_name: transformers model_name: Qwen3-14B-SFT-20250905150035 tags: - generated_from_trainer - hf_jobs - trl - sft licence: license --- # Model Card for Qwen3-14B-SFT-20250905150035 This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="qgallouedec/Qwen3-14B-SFT-20250905150035", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0.dev0 - Transformers: 4.56.1 - Pytorch: 2.8.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
qgallouedec/Qwen3-14B-SFT-20250905150036
qgallouedec
2025-09-05T15:03:10Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "trl", "hf_jobs", "dataset:trl-lib/Capybara", "base_model:Qwen/Qwen3-8B", "base_model:finetune:Qwen/Qwen3-8B", "endpoints_compatible", "region:us" ]
null
2025-09-05T15:01:38Z
--- base_model: Qwen/Qwen3-8B datasets: trl-lib/Capybara library_name: transformers model_name: Qwen3-14B-SFT-20250905150036 tags: - generated_from_trainer - sft - trl - hf_jobs licence: license --- # Model Card for Qwen3-14B-SFT-20250905150036 This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="qgallouedec/Qwen3-14B-SFT-20250905150036", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0.dev0 - Transformers: 4.56.1 - Pytorch: 2.8.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
hokpertoy/blockassist-bc-scented_slimy_toad_1757084532
hokpertoy
2025-09-05T15:02:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "scented slimy toad", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:02:13Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - scented slimy toad --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Woutermans/zeta-0-5b-sft-lora
Woutermans
2025-09-05T15:02:36Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-Coder-0.5B", "base_model:finetune:unsloth/Qwen2.5-Coder-0.5B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-05T15:02:24Z
--- base_model: unsloth/Qwen2.5-Coder-0.5B tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Woutermans - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-Coder-0.5B This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
kittygirlhere/blockassist-bc-twitchy_beaked_coral_1757084446
kittygirlhere
2025-09-05T15:01:48Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "twitchy beaked coral", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:01:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - twitchy beaked coral --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
momomuio/blockassist-bc-foxy_reclusive_bear_1757084478
momomuio
2025-09-05T15:01:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "foxy reclusive bear", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T15:01:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - foxy reclusive bear --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
pidbu/blockassist-bc-whistling_alert_shrew_1757084293
pidbu
2025-09-05T14:59:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:58:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
original-Dr-wong-lu-yang-video-viral-clip/New.full.videos.Dr.wong.Viral.Video.Official.Tutorial
original-Dr-wong-lu-yang-video-viral-clip
2025-09-05T14:59:17Z
0
0
null
[ "region:us" ]
null
2025-09-05T14:59:03Z
vendi11/blockassist-bc-placid_placid_llama_1757084092
vendi11
2025-09-05T14:55:35Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:55:31Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
VIDEOS-18-DR-WONG-LU-YANG-CCTV-VIRAL-LINK/ORIGINAL.FULL.VIDEO.DR.WONG.LU.YANG.CCTV.VIRAL.VIDEO.Official.Tutorial
VIDEOS-18-DR-WONG-LU-YANG-CCTV-VIRAL-LINK
2025-09-05T14:55:20Z
0
0
null
[ "region:us" ]
null
2025-09-05T14:54:59Z
momomuio/blockassist-bc-pudgy_thriving_okapi_1757083973
momomuio
2025-09-05T14:53:17Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pudgy thriving okapi", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:52:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pudgy thriving okapi --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
CodeAtCMU/Llama-3.2-3B_full_sft_code_data_120K_remove_comments
CodeAtCMU
2025-09-05T14:52:51Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-05T14:52:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
openbmb/MiniCPM4.1-8B
openbmb
2025-09-05T14:51:57Z
4
3
transformers
[ "transformers", "safetensors", "minicpm", "text-generation", "conversational", "custom_code", "zh", "en", "arxiv:2506.07900", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2025-09-02T07:14:25Z
--- license: apache-2.0 language: - zh - en pipeline_tag: text-generation library_name: transformers --- <div align="center"> <img src="https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm_logo.png?raw=true" width="500em" ></img> </div> <p align="center"> <a href="https://github.com/OpenBMB/MiniCPM/" target="_blank">GitHub Repo</a> | <a href="https://arxiv.org/abs/2506.07900" target="_blank">Technical Report</a> | <a href="https://mp.weixin.qq.com/s/KIhH2nCURBXuFXAtYRpuXg?poc_token=HBIsUWijxino8oJ5s6HcjcfXFRi0Xj2LJlxPYD9c">Join Us</a> </p> <p align="center"> 👋 Contact us in <a href="https://discord.gg/3cGQn9b3YM" target="_blank">Discord</a> and <a href="https://github.com/OpenBMB/MiniCPM/blob/main/assets/wechat.jpg" target="_blank">WeChat</a> </p> ## What's New - [2025.09.05] **MiniCPM4.1** series are released! This series consists of hybrid reasoning models, which can be used in both deep reasoning mode and non-reasoning mode. 🔥🔥🔥 - [2025.06.06] **MiniCPM4** series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report [here](https://arxiv.org/abs/2506.07900).🔥🔥🔥 ## MiniCPM4 and MiniCPM4.1 Series MiniCPM4 and MiniCPM4.1 series are highly efficient large language models (LLMs) designed explicitly for end-side devices, which achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems. - [MiniCPM4.1-8B](https://huggingface.co/openbmb/MiniCPM4.1-8B): The latest version of MiniCPM4, with 8B parameters, supporting hybrid reasoning. (**<-- you are here**) - [MiniCPM4.1-8B-GPTQ](https://huggingface.co/openbmb/MiniCPM4.1-8B-GPTQ): MiniCPM4.1-8B in GPTQ format. - [MiniCPM4.1-8B-AutoAWQ](https://huggingface.co/openbmb/MiniCPM4.1-8B-AutoAWQ): MiniCPM4.1-8B in AutoAWQ format. - [MiniCPM-4.1-8B-Marlin](https://huggingface.co/openbmb/MiniCPM-4.1-8B-Marlin): MiniCPM4.1-8B in Marlin format. - [MiniCPM4.1-8B-GGUF](https://huggingface.co/openbmb/MiniCPM4.1-8B-GGUF): MiniCPM4.1-8B in GGUF format. - [MiniCPM4.1-8B-MLX](https://huggingface.co/openbmb/MiniCPM4.1-8B-MLX): MiniCPM4.1-8B in MLX format. - [MiniCPM4.1-8B-Eagle3](https://huggingface.co/openbmb/MiniCPM4.1-8B-Eagle3): Eagle3 model for MiniCPM4.1-8B.
- **MiniCPM4 Series** <details> <summary>Click to expand all MiniCPM4 series models</summary> - [**MiniCPM4-8B**](https://huggingface.co/openbmb/MiniCPM4-8B): The flagship model with 8B parameters, trained on 8T tokens - [**MiniCPM4-0.5B**](https://huggingface.co/openbmb/MiniCPM4-0.5B): Lightweight version with 0.5B parameters, trained on 1T tokens - [**MiniCPM4-8B-Eagle-FRSpec**](https://huggingface.co/openbmb/MiniCPM4-8B-Eagle-FRSpec): Eagle head for FRSpec, accelerating speculative inference - [**MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu**](https://huggingface.co/openbmb/MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu): Eagle head with QAT for FRSpec, integrating speculation and quantization for ultra acceleration - [**MiniCPM4-8B-Eagle-vLLM**](https://huggingface.co/openbmb/MiniCPM4-8B-Eagle-vLLM): Eagle head in vLLM format for speculative inference - [**MiniCPM4-8B-marlin-Eagle-vLLM**](https://huggingface.co/openbmb/MiniCPM4-8B-marlin-Eagle-vLLM): Quantized Eagle head for vLLM format - [**BitCPM4-0.5B**](https://huggingface.co/openbmb/BitCPM4-0.5B): Extreme ternary quantization of MiniCPM4-0.5B, achieving 90% bit width reduction - [**BitCPM4-1B**](https://huggingface.co/openbmb/BitCPM4-1B): Extreme ternary quantization of MiniCPM3-1B, achieving 90% bit width reduction - [**MiniCPM4-Survey**](https://huggingface.co/openbmb/MiniCPM4-Survey): Generates trustworthy, long-form survey papers from user queries - [**MiniCPM4-MCP**](https://huggingface.co/openbmb/MiniCPM4-MCP): Integrates MCP tools to autonomously satisfy user requirements </details> ## Introduction MiniCPM4 and MiniCPM4.1 are extremely efficient edge-side large models that have undergone optimization across four dimensions: model architecture, learning algorithms, training data, and inference systems, achieving ultimate efficiency improvements.
- 🏗️ **Efficient Model Architecture:** - InfLLM v2 -- Trainable Sparse Attention Mechanism: Adopts a trainable sparse attention mechanism architecture where each token only needs to compute relevance with fewer than 5% of tokens when processing 128K-long texts, significantly reducing computational overhead for long texts - 🧠 **Efficient Learning Algorithms:** - Model Wind Tunnel 2.0 -- Efficient Predictable Scaling: Introduces scaling prediction methods for performance of downstream tasks, enabling more precise model training configuration search - BitCPM -- Ultimate Ternary Quantization: Compresses model parameter bit-width to 3 values, achieving 90% extreme model bit-width reduction - Efficient Training Engineering Optimization: Adopts FP8 low-precision computing technology combined with Multi-token Prediction training strategy - 📚 **High-Quality Training Data:** - UltraClean -- High-quality Pre-training Data Filtering and Generation: Builds iterative data cleaning strategies based on efficient data verification, open-sourcing high-quality Chinese and English pre-training dataset [UltraFinweb](https://huggingface.co/datasets/openbmb/Ultra-FineWeb) - UltraChat v2 -- High-quality Supervised Fine-tuning Data Generation: Constructs large-scale high-quality supervised fine-tuning datasets covering multiple dimensions including knowledge-intensive data, reasoning-intensive data, instruction-following data, long text understanding data, and tool calling data - ⚡ **Efficient Inference System:** - CPM.cu -- Lightweight and Efficient CUDA Inference Framework: Integrates sparse attention, model quantization, and speculative sampling to achieve efficient prefilling and decoding - ArkInfer -- Cross-platform Deployment System: Supports efficient deployment across multiple backend environments, providing flexible cross-platform adaptation capabilities ## Usage ### Inference with [CPM.cu](https://github.com/OpenBMB/cpm.cu) We recommend using [CPM.cu](https://github.com/OpenBMB/cpm.cu) for the inference of MiniCPM4 and MiniCPM4.1. CPM.cu is a CUDA inference framework developed by OpenBMB, which integrates efficient sparse attention, speculative sampling, and quantization techniques, fully leveraging the efficiency advantages of MiniCPM4 and MiniCPM4.1. You can install CPM.cu by running the following command: ```bash git clone https://github.com/OpenBMB/cpm.cu.git --recursive cd cpm.cu python3 setup.py install ``` MiniCPM4.1 natively supports context lengths of up to 65,536 (64K) tokens. To reproduce the long-text acceleration effect in the paper, we recommend using the LongRoPE factors that have been validated. Change the `rope_scaling` field in `config.json` as follows to enable LongRoPE.
```json { ..., "rope_scaling": { "rope_type": "longrope", "long_factor": [0.9982316082870437, 1.033048153422584, 1.0749920956484724, 1.1255096879436193, 1.1863348602111476, 1.259543828902579, 1.3476188888731149, 1.4535223827776373, 1.5807816745852985, 1.7335856049489526, 1.9168922912975785, 2.1365471404135326, 2.3994084200118646, 2.713475511863602, 3.0880118452194134, 3.533650295140154, 4.062463396503134, 4.687974098908333, 5.425075306704039, 6.289818967956352, 7.29902962722721, 8.6357018163639, 10.210822723989212, 12.053807765671676, 14.193944598909404, 16.65780676784363, 19.463620727694074, 22.628311203524586, 26.150106147261315, 30.02526691405111, 34.23183327975347, 38.73811934094828, 43.502489489729555, 48.47627117965394, 53.61139491762471, 58.857366522037935, 64.16798299215064, 69.51359464319125, 74.86555458220285, 80.21497790341579, 85.55322183307433, 90.89611806932027, 96.26245306514224, 101.68269304046481, 107.18619510219668, 112.82253283014026, 118.63764063163615, 119.88866203644656, 120.9462882391725, 121.837565139014, 122.58663780572562, 123.2147719894291, 123.74049454862576, 124.17980424685767, 124.54641761955492, 124.85202548028222, 125.10654406389756, 125.31835105170659, 125.49450117164764, 125.64091910903052, 125.76256945356558, 125.86360463815589, 125.94749252260765, 126.01712561287873], "short_factor": [0.9982316082870437, 1.033048153422584, 1.0749920956484724, 1.1255096879436193, 1.1863348602111476, 1.259543828902579, 1.3476188888731149, 1.4535223827776373, 1.5807816745852985, 1.7335856049489526, 1.9168922912975785, 2.1365471404135326, 2.3994084200118646, 2.713475511863602, 3.0880118452194134, 3.533650295140154, 4.062463396503134, 4.687974098908333, 5.425075306704039, 6.289818967956352, 7.29902962722721, 8.6357018163639, 10.210822723989212, 12.053807765671676, 14.193944598909404, 16.65780676784363, 19.463620727694074, 22.628311203524586, 26.150106147261315, 30.02526691405111, 34.23183327975347, 38.73811934094828, 43.502489489729555, 48.47627117965394, 53.61139491762471, 58.857366522037935, 64.16798299215064, 69.51359464319125, 74.86555458220285, 80.21497790341579, 85.55322183307433, 90.89611806932027, 96.26245306514224, 101.68269304046481, 107.18619510219668, 112.82253283014026, 118.63764063163615, 119.88866203644656, 120.9462882391725, 121.837565139014, 122.58663780572562, 123.2147719894291, 123.74049454862576, 124.17980424685767, 124.54641761955492, 124.85202548028222, 125.10654406389756, 125.31835105170659, 125.49450117164764, 125.64091910903052, 125.76256945356558, 125.86360463815589, 125.94749252260765, 126.01712561287873], "original_max_position_embeddings": 65536 } } ``` After modification, you can run the following command to reproduce the long-context acceleration effect (the script will automatically download the model weights from HuggingFace). ```bash python3 tests/test_generate.py ``` You can run the following command to infer with the EAGLE3 speculative decoding algorithm. ```bash python3 -m cpmcu.cli \ --model-path $BASE_MODEL_PATH \ --draft-model-path $EAGLE3_DRAFT_MODEL_PATH \ --prompt-text "Write an article about Artificial Intelligence." \ --use-eagle3 true ``` For more details about CPM.cu, please refer to [the repo CPM.cu](https://github.com/OpenBMB/cpm.cu). ### Hybrid Reasoning Mode MiniCPM4.1 supports hybrid reasoning mode, which can be used in both deep reasoning mode and non-reasoning mode.
To enable hybrid reasoning, users can set `enable_thinking=True` in `tokenizer.apply_chat_template` to enable reasoning mode, or set `enable_thinking=False` to enable non-reasoning mode. Similarly, users can directly append `/no_think` to the query to enable non-reasoning mode. If no special token is added, or `/think` is appended to the query, the model enables reasoning mode. ```python # Enable reasoning mode prompt_text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=True ) # Enable non-reasoning mode prompt_text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, enable_thinking=False ) ``` ### Inference with Transformers ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch torch.manual_seed(0) path = 'openbmb/MiniCPM4.1-8B' device = "cuda" tokenizer = AutoTokenizer.from_pretrained(path) model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map=device, trust_remote_code=True) # User can directly use the chat interface # responds, history = model.chat(tokenizer, "Write an article about Artificial Intelligence.", temperature=0.7, top_p=0.7) # print(responds) # User can also use the generate interface messages = [ {"role": "user", "content": "Write an article about Artificial Intelligence."}, ] prompt_text = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, ) model_inputs = tokenizer([prompt_text], return_tensors="pt").to(device) model_outputs = model.generate( **model_inputs, max_new_tokens=32768, top_p=0.95, temperature=0.6 ) output_token_ids = [ model_outputs[i][len(model_inputs[i]):] for i in range(len(model_inputs['input_ids'])) ] responses = tokenizer.batch_decode(output_token_ids, skip_special_tokens=True)[0] print(responses) ``` MiniCPM4.1-8B supports `InfLLM v2`, a sparse attention mechanism designed for efficient long-sequence inference. It requires the [infllmv2_cuda_impl](https://github.com/OpenBMB/infllmv2_cuda_impl) library. You can install it by running the following command: ```bash git clone -b feature_infer https://github.com/OpenBMB/infllmv2_cuda_impl.git cd infllmv2_cuda_impl git submodule update --init --recursive pip install -e . # or python setup.py install ``` To enable InfLLM v2, you need to add the `sparse_config` field in `config.json`: ```json { ..., "sparse_config": { "kernel_size": 32, "kernel_stride": 16, "init_blocks": 1, "block_size": 64, "window_size": 2048, "topk": 64, "use_nope": false, "dense_len": 8192 } } ``` These parameters control the behavior of InfLLM v2: * `kernel_size` (default: 32): The size of semantic kernels. * `kernel_stride` (default: 16): The stride between adjacent kernels. * `init_blocks` (default: 1): The number of initial blocks that every query token attends to. This ensures attention to the beginning of the sequence. * `block_size` (default: 64): The block size for key-value blocks. * `window_size` (default: 2048): The size of the local sliding window. * `topk` (default: 64): Specifies that each token computes attention with only the top-k most relevant key-value blocks. * `use_nope` (default: false): Whether to use the NOPE technique in block selection for improved performance. * `dense_len` (default: 8192): Since sparse attention offers limited benefits for short sequences, the model can use standard (dense) attention for shorter texts.
MiniCPM4.1 natively supports context lengths of up to 65,536 (64K) tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens by modifying the LongRoPE factor. To apply the LongRoPE modification, adjust the `rope_scaling` fields in the model's `config.json` file. ```json { ..., "rope_scaling": { "rope_type": "longrope", "long_factor": [0.9982316082870437, 1.033048153422584, 1.0749920956484724, 1.1255096879436193, 1.1863348602111476, 1.259543828902579, 1.3476188888731149, 1.4535223827776373, 1.5807816745852985, 1.7335856049489526, 1.9168922912975785, 2.1365471404135326, 2.3994084200118646, 2.713475511863602, 3.0880118452194134, 3.533650295140154, 4.062463396503134, 4.687974098908333, 5.425075306704039, 6.289818967956352, 7.29902962722721, 8.6357018163639, 10.210822723989212, 12.053807765671676, 14.193944598909404, 16.65780676784363, 19.463620727694074, 22.628311203524586, 26.150106147261315, 30.02526691405111, 34.23183327975347, 38.73811934094828, 43.502489489729555, 48.47627117965394, 53.61139491762471, 58.857366522037935, 64.16798299215064, 69.51359464319125, 74.86555458220285, 80.21497790341579, 85.55322183307433, 90.89611806932027, 96.26245306514224, 101.68269304046481, 107.18619510219668, 112.82253283014026, 118.63764063163615, 119.88866203644656, 120.9462882391725, 121.837565139014, 122.58663780572562, 123.2147719894291, 123.74049454862576, 124.17980424685767, 124.54641761955492, 124.85202548028222, 125.10654406389756, 125.31835105170659, 125.49450117164764, 125.64091910903052, 125.76256945356558, 125.86360463815589, 125.94749252260765, 126.01712561287873], "short_factor": [0.9982316082870437, 1.033048153422584, 1.0749920956484724, 1.1255096879436193, 1.1863348602111476, 1.259543828902579, 1.3476188888731149, 1.4535223827776373, 1.5807816745852985, 1.7335856049489526, 1.9168922912975785, 2.1365471404135326, 2.3994084200118646, 2.713475511863602, 3.0880118452194134, 3.533650295140154, 4.062463396503134, 4.687974098908333, 5.425075306704039, 6.289818967956352, 7.29902962722721, 8.6357018163639, 10.210822723989212, 12.053807765671676, 14.193944598909404, 16.65780676784363, 19.463620727694074, 22.628311203524586, 26.150106147261315, 30.02526691405111, 34.23183327975347, 38.73811934094828, 43.502489489729555, 48.47627117965394, 53.61139491762471, 58.857366522037935, 64.16798299215064, 69.51359464319125, 74.86555458220285, 80.21497790341579, 85.55322183307433, 90.89611806932027, 96.26245306514224, 101.68269304046481, 107.18619510219668, 112.82253283014026, 118.63764063163615, 119.88866203644656, 120.9462882391725, 121.837565139014, 122.58663780572562, 123.2147719894291, 123.74049454862576, 124.17980424685767, 124.54641761955492, 124.85202548028222, 125.10654406389756, 125.31835105170659, 125.49450117164764, 125.64091910903052, 125.76256945356558, 125.86360463815589, 125.94749252260765, 126.01712561287873], "original_max_position_embeddings": 65536 } } ``` ### Inference with [SGLang](https://github.com/sgl-project/sglang) #### Speculative Decoding For accelerated inference with speculative
decoding, follow these steps: ##### 1. Download MiniCPM4.1 Draft Model First, download the MiniCPM4.1 draft model: ```bash cd /your_path git clone https://huggingface.co/openbmb/MiniCPM4.1-8B-Eagle3 ``` ##### 2. Install EAGLE3-Compatible SGLang The EAGLE3 adaptation PR has been submitted. For now, use our repository for installation: ```bash git clone https://github.com/LDLINGLINGLING/sglang.git cd sglang pip install -e . ``` ##### 3. Launch SGLang Server with Speculative Decoding Start the SGLang server with speculative decoding enabled: ```bash python -m sglang.launch_server \ --model-path "openbmb/MiniCPM4.1-8B" \ --host "127.0.0.1" \ --port 30002 \ --mem-fraction-static 0.9 \ --speculative-algorithm EAGLE3 \ --speculative-draft-model-path "your/path/MiniCPM4_1-8B-Eagle3-bf16" \ --speculative-num-steps 3 \ --speculative-eagle-topk 1 \ --speculative-num-draft-tokens 32 \ --temperature 0.7 ``` ##### 4. Client Usage The client usage remains the same for both standard and speculative decoding: ```python import openai client = openai.Client(base_url=f"http://localhost:30002/v1", api_key="None") response = client.chat.completions.create( model="openbmb/MiniCPM4.1-8B", messages=[ {"role": "user", "content": "Write an article about Artificial Intelligence."}, ], temperature=0.6, max_tokens=32768, ) print(response.choices[0].message.content) ``` Note: Make sure to update the port number in the client code to match the server port (30002 in the speculative decoding example). ##### Configuration Parameters - `--speculative-algorithm EAGLE3`: Enables EAGLE3 speculative decoding - `--speculative-draft-model-path`: Path to the draft model for speculation - `--speculative-num-steps`: Number of speculative steps (default: 3) - `--speculative-eagle-topk`: Top-k parameter for EAGLE (default: 1) - `--speculative-num-draft-tokens`: Number of draft tokens (default: 32) - `--mem-fraction-static`: Memory fraction for static allocation (default: 0.9) #### Standard Inference (Without Speculative Decoding) For now, you need to install our forked version of SGLang. ```bash git clone -b openbmb https://github.com/OpenBMB/sglang.git cd sglang pip install --upgrade pip pip install -e "python[all]" ``` You can start the inference server by running the following command: ```bash python -m sglang.launch_server --model openbmb/MiniCPM4.1-8B --trust-remote-code --port 30000 --chat-template chatml ``` Then you can use the chat interface by running the following command: ```python import openai client = openai.Client(base_url=f"http://localhost:30000/v1", api_key="None") response = client.chat.completions.create( model="openbmb/MiniCPM4.1-8B", messages=[ {"role": "user", "content": "Write an article about Artificial Intelligence."}, ], temperature=0.6, max_tokens=32768, ) print(response.choices[0].message.content) ``` ### Inference with [vLLM](https://github.com/vllm-project/vllm) #### Speculative Decoding For accelerated inference with speculative decoding using vLLM, follow these steps: ##### 1. Download MiniCPM4.1 Draft Model First, download the MiniCPM4.1 draft model: ```bash cd /your_path git clone https://huggingface.co/openbmb/MiniCPM4.1-8B-Eagle3 ``` ##### 2. Install EAGLE3-Compatible vLLM The EAGLE3 vLLM PR has been submitted. For now, use our repository for installation: ```bash git clone https://github.com/LDLINGLINGLING/vllm.git cd vllm pip install -e . ``` ##### 3. Launch vLLM Server with Speculative Decoding Start the vLLM inference server with speculative decoding enabled. 
Make sure to update the model path in the speculative-config to point to your downloaded MiniCPM4_1-8B-Eagle3-bf16 folder: ```bash VLLM_USE_V1=1 \ vllm serve openbmb/MiniCPM4.1-8B \ --seed 42 \ --trust-remote-code \ --speculative-config '{ "model": "your/path/MiniCPM4_1-8B-Eagle3-bf16", "num_speculative_tokens": 3, "method": "eagle3", "draft_tensor_parallel_size": 1 }' ``` ##### 4. Client Usage Example The client usage remains the same for both standard and speculative decoding: ```python import openai client = openai.Client(base_url="http://localhost:8000/v1", api_key="EMPTY") response = client.chat.completions.create( model="openbmb/MiniCPM4.1-8B", messages=[ {"role": "user", "content": "Write an article about Artificial Intelligence."}, ], temperature=0.6, max_tokens=32768, extra_body=dict(add_special_tokens=True), # Ensures special tokens are added for chat template ) print(response.choices[0].message.content) ``` ##### vLLM Configuration Parameters - `VLLM_USE_V1=1`: Enables the vLLM v1 API - `--speculative-config`: JSON configuration for speculative decoding - `model`: Path to the draft model for speculation - `num_speculative_tokens`: Number of speculative tokens (default: 3) - `method`: Speculative decoding method (eagle3) - `draft_tensor_parallel_size`: Tensor parallel size for the draft model (default: 1) - `--seed`: Random seed for reproducibility - `--trust-remote-code`: Allow execution of remote code for custom models #### Standard Inference (Without Speculative Decoding) For now, you need to install the latest version of vLLM: ```bash pip install -U vllm \ --pre \ --extra-index-url https://wheels.vllm.ai/nightly ``` Then you can run inference on MiniCPM4.1-8B with vLLM: ```python from transformers import AutoTokenizer from vllm import LLM, SamplingParams model_name = "openbmb/MiniCPM4.1-8B" prompt = [{"role": "user", "content": "Write an article about Artificial Intelligence."}] tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) input_text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True) llm = LLM( model=model_name, trust_remote_code=True, max_num_batched_tokens=65536, dtype="bfloat16", gpu_memory_utilization=0.8, ) sampling_params = SamplingParams(top_p=0.95, temperature=0.6, max_tokens=32768) outputs = llm.generate(prompts=input_text, sampling_params=sampling_params) print(outputs[0].outputs[0].text) ``` You can also start the inference server by running the following command: > **Note**: In vLLM's chat API, `add_special_tokens` is `False` by default. This means important special tokens—such as the beginning-of-sequence (BOS) token—will not be added automatically. To ensure the input prompt is correctly formatted for the model, you should explicitly set `extra_body={"add_special_tokens": True}`.
```bash vllm serve openbmb/MiniCPM4.1-8B ``` Then you can use the chat interface by running the following code: ```python import openai client = openai.Client(base_url="http://localhost:8000/v1", api_key="EMPTY") response = client.chat.completions.create( model="openbmb/MiniCPM4.1-8B", messages=[ {"role": "user", "content": "Write an article about Artificial Intelligence."}, ], temperature=0.6, max_tokens=32768, extra_body=dict(add_special_tokens=True), # Ensures special tokens are added for chat template ) print(response.choices[0].message.content) ``` ## Evaluation Results On two typical end-side chips, Jetson AGX Orin and RTX 4090, MiniCPM4 demonstrates significantly faster processing speed than similar-size models on long-text processing tasks. As text length increases, MiniCPM4's efficiency advantage becomes more pronounced. On the Jetson AGX Orin platform, MiniCPM4 achieves approximately a 7x decoding speed improvement over Qwen3-8B. ![benchmark](https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm4/efficiency.png?raw=true) MiniCPM4.1 achieves a 3x decoding speed improvement in reasoning. ![benchmark](https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm4/minicpm4.1_speed.png?raw=true) #### Comprehensive Evaluation MiniCPM4.1 is released as an end-side model at the 8B parameter scale, achieving best-in-class performance in its category. ![benchmark](https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm4/benchmark4.1.png?raw=true) #### Long Text Evaluation MiniCPM4 is pre-trained on 32K-token long texts and achieves length extension through YaRN technology. In the 128K long-text needle-in-a-haystack task, MiniCPM4 demonstrates outstanding performance. ![long-niah](https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm4/128k-niah.png?raw=true) ## Statement - As a language model, MiniCPM generates content by learning from a vast amount of text. - However, it does not possess the ability to comprehend or express personal opinions or value judgments. - Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers. - Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own. ## LICENSE - This repository and MiniCPM models are released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License. ## Citation - Please cite our [paper](https://arxiv.org/abs/2506.07900) if you find our work valuable. ```bibtex @article{minicpm4, title={{MiniCPM4}: Ultra-Efficient LLMs on End Devices}, author={MiniCPM Team}, journal={arXiv preprint arXiv:2506.07900}, year={2025} } ```
Miracle-man/blockassist-bc-singing_lithe_koala_1757082017
Miracle-man
2025-09-05T14:51:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "singing lithe koala", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:51:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - singing lithe koala --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hokpertoy/blockassist-bc-bristly_striped_flamingo_1757083842
hokpertoy
2025-09-05T14:51:07Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bristly striped flamingo", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:50:42Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bristly striped flamingo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hokpertoy/blockassist-bc-tricky_curious_impala_1757083726
hokpertoy
2025-09-05T14:49:19Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tricky curious impala", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:48:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tricky curious impala --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kittygirlhere/blockassist-bc-twitchy_beaked_coral_1757083617
kittygirlhere
2025-09-05T14:48:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "twitchy beaked coral", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:47:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - twitchy beaked coral --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1757083552
bah63843
2025-09-05T14:46:40Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:46:35Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cebbbopwq/blockassist-bc-meek_deadly_alligator_1757083493
cebbbopwq
2025-09-05T14:45:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "meek deadly alligator", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:44:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - meek deadly alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/Arcadia-12B-Fusion-GGUF
mradermacher
2025-09-05T14:45:09Z
0
1
transformers
[ "transformers", "gguf", "merge", "lazymergekit", "en", "base_model:SvalTek/Arcadia-12B-Fusion", "base_model:quantized:SvalTek/Arcadia-12B-Fusion", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-05T13:35:11Z
--- base_model: SvalTek/Arcadia-12B-Fusion language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - merge - lazymergekit --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/SvalTek/Arcadia-12B-Fusion <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Arcadia-12B-Fusion-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/Arcadia-12B-Fusion-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Arcadia-12B-Fusion-GGUF/resolve/main/Arcadia-12B-Fusion.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Arcadia-12B-Fusion-GGUF/resolve/main/Arcadia-12B-Fusion.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/Arcadia-12B-Fusion-GGUF/resolve/main/Arcadia-12B-Fusion.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Arcadia-12B-Fusion-GGUF/resolve/main/Arcadia-12B-Fusion.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/Arcadia-12B-Fusion-GGUF/resolve/main/Arcadia-12B-Fusion.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/Arcadia-12B-Fusion-GGUF/resolve/main/Arcadia-12B-Fusion.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Arcadia-12B-Fusion-GGUF/resolve/main/Arcadia-12B-Fusion.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Arcadia-12B-Fusion-GGUF/resolve/main/Arcadia-12B-Fusion.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Arcadia-12B-Fusion-GGUF/resolve/main/Arcadia-12B-Fusion.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Arcadia-12B-Fusion-GGUF/resolve/main/Arcadia-12B-Fusion.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Arcadia-12B-Fusion-GGUF/resolve/main/Arcadia-12B-Fusion.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
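As a minimal local-inference sketch (assuming a llama.cpp build with the `llama-cli` binary on your PATH and the `huggingface-cli` tool installed; the quant chosen below is one of the files listed in the table above):

```bash
# Fetch one quant from this repo, then run it with llama.cpp
huggingface-cli download mradermacher/Arcadia-12B-Fusion-GGUF \
  Arcadia-12B-Fusion.Q4_K_M.gguf --local-dir .
llama-cli -m Arcadia-12B-Fusion.Q4_K_M.gguf \
  -p "Write a short story about a lighthouse." -n 256
```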
zenqqq/blockassist-bc-restless_reptilian_caterpillar_1757083270
zenqqq
2025-09-05T14:42:32Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "restless reptilian caterpillar", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:42:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - restless reptilian caterpillar --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
gouki510/gptoss-20b-secure
gouki510
2025-09-05T14:40:28Z
0
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/gpt-oss-20b", "base_model:finetune:unsloth/gpt-oss-20b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-09-05T14:18:07Z
--- base_model: unsloth/gpt-oss-20b tags: - text-generation-inference - transformers - unsloth - gpt_oss license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** gouki510 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gpt-oss-20b This gpt_oss model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
canoplos112/blockassist-bc-yapping_sleek_squirrel_1757083014
canoplos112
2025-09-05T14:39:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping sleek squirrel", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:37:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping sleek squirrel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Cike-Dev/GemmaOffensiveClassifier
Cike-Dev
2025-09-05T14:38:10Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "gemma3_text", "text-generation", "generated_from_trainer", "sft", "trl", "conversational", "base_model:google/gemma-3-270m-it", "base_model:finetune:google/gemma-3-270m-it", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-04T03:15:29Z
--- base_model: google/gemma-3-270m-it library_name: transformers model_name: GemmaOffensiveClassifier tags: - generated_from_trainer - sft - trl licence: license --- # Model Card for GemmaOffensiveClassifier This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Cike-Dev/GemmaOffensiveClassifier", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.22.2 - Transformers: 4.56.0 - Pytorch: 2.8.0+cu126 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
hokpertoy/blockassist-bc-lithe_hulking_wasp_1757083037
hokpertoy
2025-09-05T14:37:41Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lithe hulking wasp", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:37:17Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lithe hulking wasp --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vendi11/blockassist-bc-placid_placid_llama_1757082960
vendi11
2025-09-05T14:36:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "placid placid llama", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:36:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - placid placid llama --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
pidbu/blockassist-bc-whistling_alert_shrew_1757082897
pidbu
2025-09-05T14:36:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:35:40Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
GANGodfather/Affine-PAXJRE14
GANGodfather
2025-09-05T14:34:34Z
13
0
transformers
[ "transformers", "safetensors", "gpt_oss", "text-generation", "vllm", "conversational", "arxiv:2508.10925", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "mxfp4", "region:us" ]
text-generation
2025-09-02T04:03:16Z
--- license: apache-2.0 pipeline_tag: text-generation library_name: transformers tags: - vllm --- <p align="center"> <img alt="gpt-oss-120b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-120b.svg"> </p> <p align="center"> <a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> · <a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> · <a href="https://arxiv.org/abs/2508.10925"><strong>Model card</strong></a> · <a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a> </p> <br> Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases. We’re releasing two flavors of these open models: - `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters) - `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters) Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise. > [!NOTE] > This model card is dedicated to the larger `gpt-oss-120b` model. Check out [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) for the smaller model. # Highlights * **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk—ideal for experimentation, customization, and commercial deployment. * **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs. * **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users. * **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning. * **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs. * **MXFP4 quantization:** The models were post-trained with MXFP4 quantization of the MoE weights, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory. All evals were performed with the same MXFP4 quantization. --- # Inference examples ## Transformers You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package. 
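As a minimal sketch of that second path (building harmony-formatted inputs via the chat template and passing them to `model.generate`), assuming the dependencies from the install step below are present; this is an illustrative snippet, not part of the official guides:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-120b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

# The chat template applies the harmony response format to the conversation.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:]))
```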
To get started, install the necessary dependencies to set up your environment: ``` pip install -U transformers kernels torch ``` Once set up, you can proceed to run the model by running the snippet below: ```py from transformers import pipeline import torch model_id = "openai/gpt-oss-120b" pipe = pipeline( "text-generation", model=model_id, torch_dtype="auto", device_map="auto", ) messages = [ {"role": "user", "content": "Explain quantum mechanics clearly and concisely."}, ] outputs = pipe( messages, max_new_tokens=256, ) print(outputs[0]["generated_text"][-1]) ``` Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible webserver: ``` transformers serve transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-120b ``` [Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers) ## vLLM vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server. ```bash uv pip install --pre vllm==0.10.1+gptoss \ --extra-index-url https://wheels.vllm.ai/gpt-oss/ \ --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \ --index-strategy unsafe-best-match vllm serve openai/gpt-oss-120b ``` [Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm) ## PyTorch / Triton To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation). ## Ollama If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download). ```bash # gpt-oss-120b ollama pull gpt-oss:120b ollama run gpt-oss:120b ``` [Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama) #### LM Studio If you are using [LM Studio](https://lmstudio.ai/), you can use the following command to download it. ```bash # gpt-oss-120b lms get openai/gpt-oss-120b ``` Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners. --- # Download the model You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly with the Hugging Face CLI: ```shell # gpt-oss-120b huggingface-cli download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/ pip install gpt-oss python -m gpt_oss.chat model/ ``` # Reasoning levels You can adjust the reasoning level to suit your task across three levels: * **Low:** Fast responses for general dialogue. * **Medium:** Balanced speed and detail. * **High:** Deep and detailed analysis. The reasoning level can be set in the system prompts, e.g., "Reasoning: high". # Tool use The gpt-oss models are excellent for: * Web browsing (using built-in browsing tools) * Function calling with defined schemas * Agentic operations like browser tasks # Fine-tuning Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
This larger model `gpt-oss-120b` can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware. # Citation ```bibtex @misc{openai2025gptoss120bgptoss20bmodel, title={gpt-oss-120b & gpt-oss-20b Model Card}, author={OpenAI}, year={2025}, eprint={2508.10925}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.10925}, } ```
pidbu/blockassist-bc-whistling_alert_shrew_1757082389
pidbu
2025-09-05T14:27:59Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:27:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
John6666/tekitousugoi-mix-v40-sdxl
John6666
2025-09-05T14:25:18Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "girls", "dekai", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-09-05T14:20:32Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - girls - dekai - illustrious base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 --- The original model is [here](https://civitai.com/models/1703201/tekitousugoimix?modelVersionId=2184067). This model was created by [suteakaking](https://civitai.com/user/suteakaking).
Synthetai/juem_kontext_lora_v1
Synthetai
2025-09-05T14:24:45Z
0
0
diffusers
[ "diffusers", "lora", "flux", "text-to-image", "diffusion", "license:cc-by-nc-4.0", "region:us" ]
text-to-image
2025-09-05T12:51:09Z
--- library_name: diffusers tags: - lora - flux - text-to-image - diffusion license: cc-by-nc-4.0 pipeline_tag: text-to-image --- # juem_kontext_lora_v1 <p align="center"> <img src="examples/example.png" alt="cover" width="80%"> </p> This is a **LoRA model trained on Flux Kontext Dev**, designed to **convert photorealistic images into an illustration style**. Key feature: it works with a **single simple prompt** — no complex prompts or negative prompts required. --- ## Trigger Prompt ``` Convert the image to an illustration style ```
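As a usage sketch (this is not from the original card; both the base checkpoint and the adapter filename below are assumptions, so check the repository's file list before running):

```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

# Assumption: the LoRA targets FLUX.1 Kontext Dev, per the description above.
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Assumption: the adapter weights are stored as lora.safetensors in this repo.
pipe.load_lora_weights("Synthetai/juem_kontext_lora_v1", weight_name="lora.safetensors")

image = load_image("photo.jpg")  # your photorealistic input image
result = pipe(image=image, prompt="Convert the image to an illustration style").images[0]
result.save("illustration.png")
```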
AnerYubo/blockassist-bc-fanged_camouflaged_cassowary_1757082181
AnerYubo
2025-09-05T14:23:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fanged camouflaged cassowary", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:23:01Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fanged camouflaged cassowary --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
qgallouedec/Qwen3-8B-SFT-20250905141719
qgallouedec
2025-09-05T14:21:07Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "sft", "hf_jobs", "trl", "dataset:trl-lib/Capybara", "base_model:Qwen/Qwen3-8B", "base_model:finetune:Qwen/Qwen3-8B", "endpoints_compatible", "region:us" ]
null
2025-09-05T14:18:23Z
--- base_model: Qwen/Qwen3-8B datasets: trl-lib/Capybara library_name: transformers model_name: Qwen3-8B-SFT-20250905141719 tags: - generated_from_trainer - sft - hf_jobs - trl licence: license --- # Model Card for Qwen3-8B-SFT-20250905141719 This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="qgallouedec/Qwen3-8B-SFT-20250905141719", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0.dev0 - Transformers: 4.56.1 - Pytorch: 2.8.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
chainway9/blockassist-bc-untamed_quick_eel_1757080478
chainway9
2025-09-05T14:20:05Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "untamed quick eel", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:20:00Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - untamed quick eel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
pidbu/blockassist-bc-whistling_alert_shrew_1757081870
pidbu
2025-09-05T14:19:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "whistling alert shrew", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:18:36Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - whistling alert shrew --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
canoplos112/blockassist-bc-yapping_sleek_squirrel_1757081879
canoplos112
2025-09-05T14:19:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping sleek squirrel", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:18:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping sleek squirrel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
klmdr22/blockassist-bc-wild_loud_newt_1757081647
klmdr22
2025-09-05T14:14:49Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wild loud newt", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:14:45Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wild loud newt --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
rbelanec/train_cb_1757081470
rbelanec
2025-09-05T14:14:45Z
0
0
peft
[ "peft", "safetensors", "llama-factory", "prefix-tuning", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "region:us" ]
null
2025-09-05T14:12:02Z
--- library_name: peft license: llama3 base_model: meta-llama/Meta-Llama-3-8B-Instruct tags: - llama-factory - prefix-tuning - generated_from_trainer model-index: - name: train_cb_1757081470 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # train_cb_1757081470 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the cb dataset. It achieves the following results on the evaluation set: - Loss: 0.4824 - Num Input Tokens Seen: 306152 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 789 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen | |:-------------:|:------:|:----:|:---------------:|:-----------------:| | 0.3315 | 0.5044 | 57 | 0.2731 | 15568 | | 0.1533 | 1.0088 | 114 | 1.0623 | 30760 | | 0.2727 | 1.5133 | 171 | 0.5874 | 46120 | | 0.3404 | 2.0177 | 228 | 0.2549 | 61792 | | 0.6691 | 2.5221 | 285 | 0.3684 | 77136 | | 0.4382 | 3.0265 | 342 | 0.2316 | 92944 | | 0.3287 | 3.5310 | 399 | 0.4520 | 108704 | | 0.3698 | 4.0354 | 456 | 0.2106 | 123744 | | 0.3123 | 4.5398 | 513 | 0.2462 | 139232 | | 0.0127 | 5.0442 | 570 | 0.2971 | 154632 | | 0.1652 | 5.5487 | 627 | 0.4199 | 169832 | | 0.0145 | 6.0531 | 684 | 0.4591 | 185424 | | 0.0089 | 6.5575 | 741 | 0.5756 | 201168 | | 0.0062 | 7.0619 | 798 | 0.4575 | 216400 | | 0.0005 | 7.5664 | 855 | 0.4710 | 231792 | | 0.0008 | 8.0708 | 912 | 0.4744 | 247656 | | 0.0025 | 8.5752 | 969 | 0.4987 | 263304 | | 0.0004 | 9.0796 | 1026 | 0.4878 | 278160 | | 0.0007 | 9.5841 | 1083 | 0.4866 | 293584 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.8.0+cu128 - Datasets 3.6.0 - Tokenizers 0.21.1
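As a quick usage sketch (not part of the generated card above): PEFT adapters like this one can typically be loaded with `AutoPeftModelForCausalLM`, which resolves the base model from the adapter config; access to the gated Llama 3 weights is assumed, and the prompt below is a hypothetical CB-style NLI query since the exact training format is not documented here.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads meta-llama/Meta-Llama-3-8B-Instruct plus this prefix-tuning adapter.
model = AutoPeftModelForCausalLM.from_pretrained("rbelanec/train_cb_1757081470", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Hypothetical prompt format for the CB (CommitmentBank) task.
prompt = "Premise: It rained all night. Hypothesis: The ground is wet. Entailment, contradiction, or neutral?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0], skip_special_tokens=True))
```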
qgallouedec/Qwen3-8B-SFT-20250905141056
qgallouedec
2025-09-05T14:13:32Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "hf_jobs", "dataset:trl-lib/Capybara", "base_model:Qwen/Qwen3-8B", "base_model:finetune:Qwen/Qwen3-8B", "endpoints_compatible", "region:us" ]
null
2025-09-05T14:11:49Z
--- base_model: Qwen/Qwen3-8B datasets: trl-lib/Capybara library_name: transformers model_name: Qwen3-8B-SFT-20250905141056 tags: - generated_from_trainer - trl - sft - hf_jobs licence: license --- # Model Card for Qwen3-8B-SFT-20250905141056 This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) on the [trl-lib/Capybara](https://huggingface.co/datasets/trl-lib/Capybara) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="qgallouedec/Qwen3-8B-SFT-20250905141056", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.23.0.dev0 - Transformers: 4.56.1 - Pytorch: 2.8.0+cu128 - Datasets: 4.0.0 - Tokenizers: 0.22.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
canoplos112/blockassist-bc-yapping_sleek_squirrel_1757081124
canoplos112
2025-09-05T14:07:16Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping sleek squirrel", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:05:59Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping sleek squirrel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
bah63843/blockassist-bc-plump_fast_antelope_1757080840
bah63843
2025-09-05T14:01:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T14:01:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
matherchodhuuu/blockassist-bc-lightfooted_skilled_chameleon_1757078607
matherchodhuuu
2025-09-05T13:59:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "lightfooted skilled chameleon", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T13:59:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - lightfooted skilled chameleon --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fangcaotank/task-13-Qwen-Qwen2.5-3B-Instruct
fangcaotank
2025-09-05T13:57:32Z
561
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:adapter:Qwen/Qwen2.5-3B-Instruct", "region:us" ]
null
2025-08-08T06:57:59Z
--- base_model: Qwen/Qwen2.5-3B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.0
mradermacher/KoT-platypus2-13B-i1-GGUF
mradermacher
2025-09-05T12:27:29Z
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-09-05T12:10:19Z
<!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/kyujinpy/KoT-platypus2-13B
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1757073535
sampingkaca72
2025-09-05T12:27:06Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "armored stealthy elephant", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T12:27:03Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - armored stealthy elephant --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
carterray/carterray-demo
carterray
2025-09-05T12:22:58Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-09-05T11:32:53Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Carter --- # Carterray Demo <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Carter` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Carter", "lora_weights": "https://huggingface.co/carterray/carterray-demo/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('carterray/carterray-demo', weight_name='lora.safetensors') image = pipeline('Carter').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 3947 - Learning rate: 0.0004 - LoRA rank: 32 ## Contribute your own examples You can use the [community tab](https://huggingface.co/carterray/carterray-demo/discussions) to add images that show off what you’ve made with this LoRA.
vommertou/blockassist-bc-mute_whistling_hamster_1757074719
vommertou
2025-09-05T12:19:04Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "mute whistling hamster", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T12:18:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - mute whistling hamster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-Adam-HessianMaskToken-0.001-v2_7301
luckeciano
2025-09-05T12:15:36Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-05T09:09:56Z
--- base_model: Qwen/Qwen2.5-Math-7B datasets: DigitalLearningGmbH/MATH-lighteval library_name: transformers model_name: Qwen-2.5-7B-GRPO-NoBaseline-Adam-HessianMaskToken-0.001-v2_5868 tags: - generated_from_trainer - open-r1 - trl - grpo licence: license --- # Model Card for Qwen-2.5-7B-GRPO-NoBaseline-Adam-HessianMaskToken-0.001-v2_5868 This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-Adam-HessianMaskToken-0.001-v2_5868", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/qd17ww4k) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.16.0.dev0 - Transformers: 4.49.0 - Pytorch: 2.5.1 - Datasets: 3.4.1 - Tokenizers: 0.21.2 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
coelacanthxyz/blockassist-bc-finicky_thriving_grouse_1757072674
coelacanthxyz
2025-09-05T12:14:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "finicky thriving grouse", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T12:14:48Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - finicky thriving grouse --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cactus-S/blockassist-bc-reclusive_arctic_panther_1757072730
cactus-S
2025-09-05T12:12:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive arctic panther", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T12:12:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive arctic panther --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
boahancock/blockassist-bc-iridescent_rapid_toad_1757073987
boahancock
2025-09-05T12:12:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "iridescent rapid toad", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T12:07:55Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - iridescent rapid toad --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/math-genius-7B-GGUF
mradermacher
2025-09-05T12:07:25Z
0
0
transformers
[ "transformers", "gguf", "trl", "sft", "en", "dataset:entfane/Mixture-Of-Thoughts-Math-No-COT", "base_model:entfane/math-genius-7B", "base_model:quantized:entfane/math-genius-7B", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-05T10:07:14Z
--- base_model: entfane/math-genius-7B datasets: - entfane/Mixture-Of-Thoughts-Math-No-COT language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/entfane/math-genius-7B <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#math-genius-7B-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/math-genius-7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/math-genius-7B-GGUF/resolve/main/math-genius-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/math-genius-7B-GGUF/resolve/main/math-genius-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/math-genius-7B-GGUF/resolve/main/math-genius-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/math-genius-7B-GGUF/resolve/main/math-genius-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/math-genius-7B-GGUF/resolve/main/math-genius-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/math-genius-7B-GGUF/resolve/main/math-genius-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/math-genius-7B-GGUF/resolve/main/math-genius-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/math-genius-7B-GGUF/resolve/main/math-genius-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/math-genius-7B-GGUF/resolve/main/math-genius-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/math-genius-7B-GGUF/resolve/main/math-genius-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/math-genius-7B-GGUF/resolve/main/math-genius-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/math-genius-7B-GGUF/resolve/main/math-genius-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
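As a quick way to try one of the files above, here is a minimal sketch using the `llama-cpp-python` bindings. The filename, context size, and prompt are assumptions for illustration; pick whichever quant from the table fits your hardware.

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table above has been downloaded
# into the current directory.
llm = Llama(model_path="math-genius-7B.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 12 * 17?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```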
mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF
mradermacher
2025-09-05T12:07:25Z
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:pot99rta/OmegaPatricide-12B-DarkerDirective-Mell", "base_model:quantized:pot99rta/OmegaPatricide-12B-DarkerDirective-Mell", "endpoints_compatible", "region:us", "imatrix" ]
null
2025-09-05T11:00:21Z
--- base_model: pot99rta/OmegaPatricide-12B-DarkerDirective-Mell language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> <!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> weighted/imatrix quants of https://huggingface.co/pot99rta/OmegaPatricide-12B-DarkerDirective-Mell <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF).*** static quants are available at https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | |
[GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF/resolve/main/OmegaPatricide-12B-DarkerDirective-Mell.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. 
Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
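If you'd rather script the download than click through the file list, here is a minimal sketch with `huggingface_hub` (the chosen quant is just an example; any filename from the table above works):

```python
from huggingface_hub import hf_hub_download

# Downloads one quant file into the local HF cache and returns its path.
path = hf_hub_download(
    repo_id="mradermacher/OmegaPatricide-12B-DarkerDirective-Mell-i1-GGUF",
    filename="OmegaPatricide-12B-DarkerDirective-Mell.i1-Q4_K_S.gguf",
)
print(path)
```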
Mungert/NVIDIA-Nemotron-Nano-12B-v2-GGUF
Mungert
2025-09-05T12:06:30Z
1,978
0
transformers
[ "transformers", "gguf", "nvidia", "pytorch", "text-generation", "en", "es", "fr", "de", "it", "ja", "dataset:nvidia/Nemotron-Post-Training-Dataset-v1", "dataset:nvidia/Nemotron-Post-Training-Dataset-v2", "dataset:nvidia/Nemotron-Pretraining-Dataset-sample", "dataset:nvidia/Nemotron-CC-v2", "dataset:nvidia/Nemotron-CC-Math-v1", "dataset:nvidia/Nemotron-Pretraining-SFT-v1", "arxiv:2504.03624", "arxiv:2508.14444", "arxiv:2412.02595", "base_model:nvidia/NVIDIA-Nemotron-Nano-12B-v2-Base", "base_model:quantized:nvidia/NVIDIA-Nemotron-Nano-12B-v2-Base", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2025-09-05T01:57:23Z
--- license: other license_name: nvidia-open-model-license license_link: >- https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/ pipeline_tag: text-generation datasets: - nvidia/Nemotron-Post-Training-Dataset-v1 - nvidia/Nemotron-Post-Training-Dataset-v2 - nvidia/Nemotron-Pretraining-Dataset-sample - nvidia/Nemotron-CC-v2 - nvidia/Nemotron-CC-Math-v1 - nvidia/Nemotron-Pretraining-SFT-v1 language: - en - es - fr - de - it - ja library_name: transformers tags: - nvidia - pytorch track_downloads: true base_model: - nvidia/NVIDIA-Nemotron-Nano-12B-v2-Base --- # <span style="color: #7FFF7F;">NVIDIA-Nemotron-Nano-12B-v2 GGUF Models</span> ## <span style="color: #7F7FFF;">Model Generation Details</span> This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`4fd1242b`](https://github.com/ggerganov/llama.cpp/commit/4fd1242bef6cb2325b4ff1c1a80f3b54b64508a6). --- ## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span> I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides. In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here: 👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py) While this does increase model file size, it significantly improves precision for a given quantization level. ### **I'd love your feedback—have you tried this? How does it perform for you?** --- <a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;"> Click here to get info on choosing the right GGUF model format </a> --- <!--Begin Original Model Card--> # NVIDIA-Nemotron-Nano-12B-v2 **Model Developer:** NVIDIA Corporation **Model Dates:** June 2025 - August 2025 **Data Freshness:** September 2024 The pretraining data has a cutoff date of September 2024. ## Model Overview NVIDIA-Nemotron-Nano-12B-v2 is a large language model (LLM) trained from scratch by NVIDIA, and designed as a unified model for both reasoning and non-reasoning tasks. It responds to user queries and tasks by first generating a reasoning trace and then concluding with a final response. The model's reasoning capabilities can be controlled via a system prompt. If the user prefers the model to provide its final answer without intermediate reasoning traces, it can be configured to do so, albeit with a slight decrease in accuracy for harder prompts that require reasoning. Conversely, allowing the model to generate reasoning traces first generally results in higher-quality final solutions to queries and tasks. The model was fine-tuned from [NVIDIA-Nemotron-Nano-12B-v2-Base](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2-Base), and was further compressed into [NVIDIA-Nemotron-Nano-9B-v2](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2). The model uses a hybrid architecture consisting primarily of Mamba-2 and MLP layers combined with just four Attention layers. For the architecture, please refer to the [Nemotron-H tech report](https://arxiv.org/abs/2504.03624). The model was trained using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) and [NeMo-RL](https://github.com/NVIDIA-NeMo/RL).
The supported languages include: English, German, Spanish, French, Italian, and Japanese. Improved using Qwen. This model is ready for commercial use. ## License/Terms of Use GOVERNING TERMS: Use of this model is governed by the [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/). ## Evaluation Results ### Benchmark Results (Reasoning On) We evaluated our model in **Reasoning-On** mode across all benchmarks, except RULER, which is evaluated in **Reasoning-Off** mode. | Benchmark | NVIDIA-Nemotron-Nano-12B-v2 | | :---- | ----- | | AIME25 | 76.25% | | MATH500 | 97.75% | | GPQA | 64.48% | | LCB | 70.79% | | BFCL v3 | 66.98% | | IFEVAL-Prompt | 84.70% | | IFEVAL-Instruction | 89.81% | All evaluations were done using [NeMo-Skills](https://github.com/NVIDIA/NeMo-Skills). We published a [tutorial](https://nvidia.github.io/NeMo-Skills/tutorials/2025/08/22/reproducing-nvidia-nemotron-nano-9b-v2-evals/) with all details necessary to reproduce our evaluation results. ## Reasoning Budget Control This model supports runtime “thinking” budget control. During inference, the user can specify how many tokens the model is allowed to "think". ![](./acc-vs-budget.png) ## Model Architecture - Architecture Type: Mamba2-Transformer Hybrid - Network Architecture: Nemotron-Hybrid ### Deployment Geography: Global ### Use Case NVIDIA-Nemotron-Nano-12B-v2 is a general purpose reasoning and chat model intended to be used in English and coding languages. Other non-English languages (German, French, Italian, Spanish and Japanese) are also supported. Developers designing AI Agent systems, chatbots, RAG systems, and other AI-powered applications. Also suitable for typical instruction-following tasks. ### Release Date: 08/29/2025 - Huggingface 08/29/2025 via https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2 ## References - [NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model](https://arxiv.org/abs/2508.14444) ## Input - Input Type(s): Text - Input Format(s): String - Input Parameters: One-Dimensional (1D): Sequences - Other Properties Related to Input: Context length up to 128K. Supported languages include German, Spanish, French, Italian, Korean, Portuguese, Russian, Japanese, Chinese and English. ## Output - Output Type(s): Text - Output Format: String - Output Parameters: One-Dimensional (1D): Sequences up to 128K Our models are designed and optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. ## Software Integration - Runtime Engine(s): NeMo 25.07.nemotron-nano-v2 - Supported Hardware Microarchitecture Compatibility: NVIDIA A10G, NVIDIA H100-80GB, NVIDIA A100 - Operating System(s): Linux ### **Use it with Transformers** The snippet below shows how to use this model with Huggingface Transformers (tested on version 4.48.3). 
``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM # Load tokenizer and model tokenizer = AutoTokenizer.from_pretrained("nvidia/NVIDIA-Nemotron-Nano-12B-v2") model = AutoModelForCausalLM.from_pretrained( "nvidia/NVIDIA-Nemotron-Nano-12B-v2", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto" ) ``` Case 1: `/think` or no reasoning signal is provided in the system prompt, so reasoning will be set to `True` ``` messages = [ {"role": "system", "content": "/think"}, {"role": "user", "content": "Write a haiku about GPUs"}, ] ``` Case 2: `/no_think` is provided, so reasoning will be set to `False` ``` messages = [ {"role": "system", "content": "/no_think"}, {"role": "user", "content": "Write a haiku about GPUs"}, ] ``` Note: `/think` or `/no_think` keywords can also be provided in “user” messages for turn-level reasoning control. The rest of the inference snippet remains the same: ``` tokenized_chat = tokenizer.apply_chat_template( messages, tokenize=True, add_generation_prompt=True, return_tensors="pt" ).to(model.device) outputs = model.generate( tokenized_chat, max_new_tokens=32, eos_token_id=tokenizer.eos_token_id ) print(tokenizer.decode(outputs[0])) ``` We recommend setting `temperature` to `0.6` and `top_p` to `0.95` for reasoning True, using greedy search for reasoning False, and increasing `max_new_tokens` to `1024` or higher for reasoning True. ### **Use it with TRT-LLM** The snippet below shows how to use this model with TRT-LLM. We tested this on the following [commit](https://github.com/NVIDIA/TensorRT-LLM/tree/46c5a564446673cdd0f56bcda938d53025b6d04e) and followed these [instructions](https://github.com/NVIDIA/TensorRT-LLM/blob/46c5a564446673cdd0f56bcda938d53025b6d04e/docs/source/installation/build-from-source-linux.md#option-2-build-tensorrt-llm-step-by-step) to build and install TRT-LLM in a docker container. ``` from tensorrt_llm import SamplingParams from tensorrt_llm._torch import LLM from tensorrt_llm._torch.pyexecutor.config import PyTorchConfig from tensorrt_llm.llmapi import KvCacheConfig from transformers import AutoTokenizer pytorch_config = PyTorchConfig( disable_overlap_scheduler=True, enable_trtllm_decoder=True ) kv_cache_config = KvCacheConfig( enable_block_reuse=False, ) ``` ``` model_id = "nvidia/NVIDIA-Nemotron-Nano-12B-v2" tokenizer = AutoTokenizer.from_pretrained(model_id) llm = LLM( model=model_id, max_seq_len=32678, max_batch_size=4, pytorch_backend_config=pytorch_config, kv_cache_config=kv_cache_config, tensor_parallel_size=8, ) messages = [ {"role": "system", "content": "/think"}, {"role": "user", "content": "Write a haiku about GPUs"}, ] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) sampling_params = SamplingParams( max_tokens=512, temperature=0.6, top_p=0.95, add_special_tokens=False, ) outputs = llm.generate([prompt], sampling_params) print(outputs[0].outputs[0].text) ``` ### **Use it with vLLM** The snippet below shows how to use this model with vLLM. Use the latest version of vLLM and follow these instructions to build and install vLLM. ```shell pip install -U "vllm>=0.10.1" ``` Now you can run the server with: ```shell vllm serve nvidia/NVIDIA-Nemotron-Nano-12B-v2 \ --trust-remote-code \ --max-num-seqs 64 \ --mamba_ssm_cache_dtype float32 ``` Note: - Remember to add `--mamba_ssm_cache_dtype float32` for accurate quality. Without this option, the model’s accuracy may degrade.
- If you encounter a CUDA OOM issue, try `--max-num-seqs 64` and consider lowering the value further if the error persists. Alternatively, you can use Docker to launch a vLLM server. ``` export TP_SIZE=1 # Adjust this value based on the number of GPUs you want to use docker run --runtime nvidia --gpus all \ -v ~/.cache/huggingface:/root/.cache/huggingface \ --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \ -p 8000:8000 \ --ipc=host \ vllm/vllm-openai:v0.10.1 \ --model nvidia/NVIDIA-Nemotron-Nano-12B-v2 \ --tensor-parallel-size ${TP_SIZE} \ --max-num-seqs 64 \ --max-model-len 131072 \ --trust-remote-code \ --mamba_ssm_cache_dtype float32 ``` #### Using Budget Control with a vLLM Server The thinking budget allows developers to keep accuracy high and meet response‑time targets - which is especially crucial for customer support, autonomous agent steps, and edge devices where every millisecond counts. With budget control, you can set a limit for internal reasoning: * `max_thinking_tokens`: This is a threshold that will attempt to end the reasoning trace at the next newline encountered in the reasoning trace. If no newline is encountered within 500 tokens, it will abruptly end the reasoning trace at `max_thinking_tokens + 500`. Start a vLLM server: ```shell vllm serve nvidia/NVIDIA-Nemotron-Nano-12B-v2 \ --trust-remote-code \ --mamba_ssm_cache_dtype float32 ``` Client for supporting budget control: ```py from typing import Any, Dict, List import openai from transformers import AutoTokenizer class ThinkingBudgetClient: def __init__(self, base_url: str, api_key: str, tokenizer_name_or_path: str): self.base_url = base_url self.api_key = api_key self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path) self.client = openai.OpenAI(base_url=self.base_url, api_key=self.api_key) def chat_completion( self, model: str, messages: List[Dict[str, Any]], max_thinking_budget: int = 512, max_tokens: int = 1024, **kwargs, ) -> Dict[str, Any]: assert ( max_tokens > max_thinking_budget ), f"thinking budget must be smaller than maximum new tokens. Given {max_tokens=} and {max_thinking_budget=}" # 1. first call chat completion to get reasoning content response = self.client.chat.completions.create( model=model, messages=messages, max_tokens=max_thinking_budget, **kwargs ) content = response.choices[0].message.content reasoning_content = content if "</think>" not in reasoning_content: # reasoning did not finish within the budget; close it with a period and the </think> tag reasoning_content = f"{reasoning_content}.\n</think>\n\n" reasoning_tokens_len = len( self.tokenizer.encode(reasoning_content, add_special_tokens=False) ) remaining_tokens = max_tokens - reasoning_tokens_len assert ( remaining_tokens > 0 ), f"remaining tokens must be positive. Given {remaining_tokens=}. Increase the max_tokens or lower the max_thinking_budget." # 2.
append reasoning content to messages and call completion messages.append({"role": "assistant", "content": reasoning_content}) prompt = self.tokenizer.apply_chat_template( messages, tokenize=False, continue_final_message=True, ) response = self.client.completions.create( model=model, prompt=prompt, max_tokens=remaining_tokens, **kwargs ) response_data = { "reasoning_content": reasoning_content.strip().strip("</think>").strip(), "content": response.choices[0].text, "finish_reason": response.choices[0].finish_reason, } return response_data ``` Calling the server with a budget (restricted to 32 tokens here as an example): ```py tokenizer_name_or_path = "nvidia/NVIDIA-Nemotron-Nano-12B-v2" client = ThinkingBudgetClient( base_url="http://localhost:8000/v1", # Nano 12B v2 deployed in thinking mode api_key="EMPTY", tokenizer_name_or_path=tokenizer_name_or_path, ) result = client.chat_completion( model="nvidia/NVIDIA-Nemotron-Nano-12B-v2", messages=[ {"role": "system", "content": "You are a helpful assistant. /think"}, {"role": "user", "content": "What is 2+2?"}, ], max_thinking_budget=32, max_tokens=512, temperature=0.6, top_p=0.95, ) print(result) ``` You should see output similar to the following: ``` {'reasoning_content': "Okay, the user asked, What is 2+2? Let me think. Well, 2 plus 2 equals 4. That's a basic.", 'content': '2 + 2 equals **4**.\n', 'finish_reason': 'stop'} ``` #### Using Tool-Calling with a vLLM Server Start a vLLM server with native tool-calling: ```shell git clone https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-12B-v2 vllm serve nvidia/NVIDIA-Nemotron-Nano-12B-v2 \ --trust-remote-code \ --mamba_ssm_cache_dtype float32 \ --enable-auto-tool-choice \ --tool-parser-plugin "NVIDIA-Nemotron-Nano-12B-v2/nemotron_toolcall_parser_no_streaming.py" \ --tool-call-parser "nemotron_json" ``` After launching a vLLM server, you can call it with tool-call support using a Python script like the one below: ```py from openai import OpenAI client = OpenAI( base_url="http://0.0.0.0:5000/v1", api_key="dummy", ) completion = client.chat.completions.create( model="nvidia/NVIDIA-Nemotron-Nano-12B-v2", messages=[ {"role": "system", "content": ""}, {"role": "user", "content": "My bill is $100. What will be the amount for 18% tip?"} ], tools=[ { "type": "function", "function": { "name": "calculate_tip", "parameters": { "type": "object", "properties": { "bill_total": { "type": "integer", "description": "The total amount of the bill" }, "tip_percentage": { "type": "integer", "description": "The percentage of tip to be applied" } }, "required": ["bill_total", "tip_percentage"] } } }, { "type": "function", "function": { "name": "convert_currency", "parameters": { "type": "object", "properties": { "amount": { "type": "integer", "description": "The amount to be converted" }, "from_currency": { "type": "string", "description": "The currency code to convert from" }, "to_currency": { "type": "string", "description": "The currency code to convert to" } }, "required": ["from_currency", "amount", "to_currency"] } } } ], temperature=0.6, top_p=0.95, max_tokens=32768, stream=False ) print(completion.choices[0].message.content) print(completion.choices[0].message.tool_calls) ``` You should see output similar to the following: ``` <think> Okay, let's see. The user has a bill of $100 and wants to know the amount for an 18% tip. Hmm, I need to calculate the tip based on the bill total and the percentage. The tools provided include calculate_tip, which takes bill_total and tip_percentage as parameters.
So the bill_total here is 100, and the tip_percentage is 18. I should call the calculate_tip function with these values. Wait, do I need to check if the parameters are integers? The bill is $100, which is an integer, and 18% is also an integer. So that fits the function's requirements. I don't need to convert any currency here because the user is asking about a tip in the same currency. So the correct tool to use is calculate_tip with those parameters. </think> [ChatCompletionMessageToolCall(id='chatcmpl-tool-e341c6954d2c48c2a0e9071c7bdefd8b', function=Function(arguments='{"bill_total": 100, "tip_percentage": 18}', name='calculate_tip'), type='function')] ``` ## Model Version - v1.0 ## Prompt Format We follow the jinja chat template provided below. This template conditionally adds `<think>\n` to the start of the Assistant response if `/think` is found in either the system prompt or any user message. If no reasoning signal is added, the model defaults to reasoning "on" mode. The chat template adds `<think></think>` to the start of the Assistant response if `/no_think` is found in the system prompt. Thus enforcing reasoning on/off behavior. ``` {%- set ns = namespace(enable_thinking = true) %} {%- for message in messages -%} {%- set content = message['content'] -%} {%- if message['role'] == 'user' or message['role'] == 'system' -%} {%- if '/think' in content -%} {%- set ns.enable_thinking = true -%} {%- elif '/no_think' in content -%} {%- set ns.enable_thinking = false -%} {%- endif -%} {%- endif -%} {%- endfor -%} {%- if messages[0]['role'] != 'system' -%} {%- set ns.non_tool_system_content = '' -%} {{- '<SPECIAL_10>System\n' -}} {%- else -%} {%- set ns.non_tool_system_content = messages[0]['content'] .replace('/think', '') .replace('/no_think', '') .strip() -%} {{- '<SPECIAL_10>System\n' + ns.non_tool_system_content }} {%- endif -%} {%- if tools -%} {%- if ns.non_tool_system_content is defined and ns.non_tool_system_content != '' -%} {{- '\n\n' -}} {%- endif -%} {{- 'You can use the following tools to assist the user if required:' -}} {{- '\n<AVAILABLE_TOOLS>[' -}} {%- for tool in tools -%} {{- (tool.function if tool.function is defined else tool) | tojson -}} {{- ', ' if not loop.last else '' -}} {%- endfor -%} {{- ']</AVAILABLE_TOOLS>\n\n' -}} {{- 'If you decide to call any tool(s), use the following format:\n' -}} {{- '<TOOLCALL>[{{"name": "tool_name1", "arguments": "tool_args1"}}, ' -}} {{- '{{"name": "tool_name2", "arguments": "tool_args2"}}]</TOOLCALL>\n\n' -}} {{- 'The user will execute tool-calls and return responses from tool(s) in this format:\n' -}} {{- '<TOOL_RESPONSE>[{{"tool_response1"}}, {{"tool_response2"}}]</TOOL_RESPONSE>\n\n' -}} {{- 'Based on the tool responses, you can call additional tools if needed, correct tool calls if any errors are found, or just respond to the user.' 
-}} {%- endif -%} {{- '\n' -}} {%- set messages = messages[1:] if messages[0]['role'] == 'system' else messages -%} {%- if messages[-1]['role'] == 'assistant' -%} {%- set ns.last_turn_assistant_content = messages[-1]['content'].strip() -%} {%- set messages = messages[:-1] -%} {%- endif -%} {%- for message in messages -%} {%- set content = message['content'] -%} {%- if message['role'] == 'user' -%} {{- '<SPECIAL_11>User\n' + content.replace('/think', '').replace('/no_think', '').strip() + '\n' }} {%- elif message['role'] == 'tool' -%} {%- if loop.first or (messages[loop.index0 - 1].role != 'tool') -%} {{- '<SPECIAL_11>User\n' + '<TOOL_RESPONSE>[' }} {%- endif -%} {{- message['content'] -}} {{- ', ' if not loop.last and (messages[loop.index0 + 1].role == 'tool') else '' -}} {%- if loop.last or (messages[loop.index0 + 1].role != 'tool') -%} {{- ']</TOOL_RESPONSE>\n' -}} {%- endif -%} {%- elif message['role'] == 'assistant' -%} {%- if '</think>' in content -%} {%- set content = content.split('</think>')[1].strip() %} {%- endif -%} {{- '<SPECIAL_11>Assistant\n' + content.strip() }} {%- if message.tool_calls -%} {%- if content.strip() != '' -%} {{- '\n\n' -}} {%- endif -%} {{- '<TOOLCALL>[' -}} {%- for call in message.tool_calls -%} {%- set fn = call.function if call.function is defined else call -%} {{- '{"name": "' + fn.name + '", "arguments": ' -}} {%- if fn.arguments is string -%} {{- fn.arguments -}} {%- else -%} {{- fn.arguments | tojson -}} {%- endif -%} {{- '}' + (', ' if not loop.last else '') -}} {%- endfor -%} {{- ']</TOOLCALL>' -}} {%- endif -%} {{- '\n<SPECIAL_12>\n' -}} {%- endif -%} {%- endfor -%} {%- if add_generation_prompt -%} {{- '<SPECIAL_11>Assistant\n' -}} {%- if ns.enable_thinking is defined and ns.enable_thinking is false -%} {{- '<think></think>' -}} {%- else -%} {{- '<think>\n' -}} {%- endif -%} {%- if ns.last_turn_assistant_content is defined and ns.last_turn_assistant_content != '' -%} {{- ns.last_turn_assistant_content -}} {%- endif -%} {%- else -%} {%- if ns.last_turn_assistant_content is defined and ns.last_turn_assistant_content != '' -%} {{- '<SPECIAL_11>Assistant\n' -}} {%- if ns.enable_thinking is defined and ns.enable_thinking is false -%} {{- '<think></think>' -}} {%- else -%} {{- '<think>\n' -}} {%- endif -%} {{- ns.last_turn_assistant_content -}} {%- if continue_final_message is defined -%} {%- if continue_final_message is false -%} {{- '\n<SPECIAL_12>\n' -}} {%- endif -%} {%- else -%} {{- '\n<SPECIAL_12>\n' -}} {%- endif -%} {%- endif -%} {%- endif -%} ``` ## Training, Testing, and Evaluation Datasets ### Training datasets * Data Modality: Text * Text Training Data Size: More than 10 Trillion Tokens * Train/Test/Valid Split: We used 100% of the corpus for pre-training and relied on external benchmarks for testing. * Data Collection Method by dataset: Hybrid: Automated, Human, Synthetic * Labeling Method by dataset: Hybrid: Automated, Human, Synthetic **Properties:** The post-training corpus for NVIDIA-Nemotron-Nano-12B-v2 consists of English and multilingual text (German, Spanish, French, Italian, Korean, Portuguese, Russian, Japanese, Chinese and English). Our sources cover a variety of document types such as: webpages, dialogue, articles, and other written materials. The corpus spans domains including code, legal, math, science, finance, and more. We also include a small portion of question-answering, and alignment style data to improve model accuracy.
For several of the domains listed above we used synthetic data, specifically reasoning traces, from DeepSeek R1/R1-0528, Qwen3-235B-A22B, Nemotron 4 340B, Qwen2.5-32B-Instruct-AWQ, Qwen2.5-14B-Instruct, Qwen 2.5 72B. The pre-training corpus for NVIDIA-Nemotron-Nano-12B-v2 consists of high-quality curated and synthetically-generated data. It is trained in the English language, as well as 15 multilingual languages and 43 programming languages. Our sources cover a variety of document types such as: webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. We also include a small portion of question-answering, and alignment style data to improve model accuracy. The model was pre-trained for approximately twenty trillion tokens. Alongside the model, we release our [final pretraining data](https://huggingface.co/collections/nvidia/nemotron-pre-training-dataset-689d9de36f84279d83786b35), as outlined in this section. For ease of analysis, there is a sample set that is ungated. For all remaining code, math and multilingual data, gating and approval is required, and the dataset is permissively licensed for model training purposes. More details on the datasets and synthetic data generation methods can be found in the technical report [NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model](https://research.nvidia.com/labs/adlr/files/NVIDIA-Nemotron-Nano-2-Technical-Report.pdf) . ## Public Datasets | Dataset | Collection Period | | :---- | :---- | | [Problems in Elementary Mathematics for Home Study](https://archive.org/details/AntonovVygodskyNikitinSankinProblemsInElementaryMathematicsForHomeStudyMir1982) | 4/23/2025 | | [GSM8K](https://github.com/openai/grade-school-math) | 4/23/2025 | | [PRM800K](https://github.com/openai/prm800k) | 4/23/2025 | | [CC-NEWS](https://commoncrawl.org/blog/news-dataset-available) | 4/23/2025 | | [Common Crawl](https://commoncrawl.org/) | 4/23/2025 | | [Wikimedia](https://dumps.wikimedia.org/) | 4/23/2025 | | [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k) | 4/23/2025 | | [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k) | 4/23/2025 | | [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) | 4/23/2025 | | [APIGen Function-Calling](https://huggingface.co/datasets/Salesforce/xlam-function-calling-60k) | 4/23/2025 | | [LMSYS-Chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) | 4/23/2025 | | [Open Textbook Library \- CC BY-SA & GNU subset](https://open.umn.edu/opentextbooks/textbooks/) and [OpenStax \- CC BY-SA subset](https://openstax.org/) | 4/23/2025 | | [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb), [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k), [PRM800K](https://github.com/openai/prm800k), and [SciBench](https://github.com/mandyyyyii/scibench) | 4/23/2025 | | [FineWeb-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2) | 4/23/2025 | | [Court Listener](https://www.courtlistener.com/help/api/bulk-data/) | Legacy Download | | [peS2o](https://huggingface.co/datasets/allenai/peS2o) | Legacy Download | | [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) | Legacy Download | | [BioRxiv](https://www.biorxiv.org/tdm) | Legacy Download | | [PMC Open Access 
Subset](https://pmc.ncbi.nlm.nih.gov/tools/openftlist/) | Legacy Download | | [OpenWebText2](https://openwebtext2.readthedocs.io/en/latest/) | Legacy Download | | [Stack Exchange Data Dump](https://archive.org/details/stackexchange) | Legacy Download | | [PubMed Abstracts](https://github.com/thoppe/The-Pile-PubMed) | Legacy Download | | [NIH ExPorter](https://exporter.nih.gov/ExPORTER_Catalog.aspx) | Legacy Download | | [arXiv](https://info.arxiv.org/help/bulk_data/index.html) | Legacy Download | | [BigScience Workshop Datasets](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#datasets) | Legacy Download | | [Reddit Dataset](https://files.pushshift.io/reddit/) | Legacy Download | | [SEC's Electronic Data Gathering, Analysis, and Retrieval (EDGAR)](https://www.sec.gov/search-filings) | Legacy Download | | [Public Software Heritage S3](https://docs.softwareheritage.org/devel/swh-export/graph/dataset.html#summary-of-dataset-versions) | Legacy Download | | [The Stack](https://huggingface.co/datasets/bigcode/the-stack) | Legacy Download | | [mC4](https://huggingface.co/datasets/legacy-datasets/mc4) | Legacy Download | | [Advanced Mathematical Problem Solving](https://github.com/hendrycks/math?tab=readme-ov-file) | Legacy Download | | [MathPile](https://github.com/GAIR-NLP/MathPile/) | Legacy Download | | [NuminaMath CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT) | Legacy Download | | [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/) | Legacy Download | | [FLAN](https://github.com/google-research/FLAN) | Legacy Download | | [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb) | Legacy Download | | [SciBench](https://github.com/mandyyyyii/scibench) | Legacy Download | | [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions) | Legacy Download | | [FinQA](https://finqasite.github.io/) | Legacy Download | | [Riddles](https://github.com/crawsome/riddles) | Legacy Download | | [Problems in Elementary Mathematics for Home Study](https://archive.org/details/AntonovVygodskyNikitinSankinProblemsInElementaryMathematicsForHomeStudyMir1982) | Legacy Download | | [MedMCQA](https://huggingface.co/datasets/openlifescienceai/medmcqa) | Legacy Download | | [Cosmos QA](https://huggingface.co/datasets/allenai/cosmos_qa) | Legacy Download | | [MCTest](https://huggingface.co/datasets/sagnikrayc/mctest) | Legacy Download | | [AI2's Reasoning Challenge](https://huggingface.co/datasets/ai2_arc) | Legacy Download | | [OpenBookQA](https://github.com/allenai/OpenBookQA) | Legacy Download | | [MMLU Auxiliary Train](https://huggingface.co/datasets/cais/mmlu/viewer/all/auxiliary_train) | Legacy Download | | [social-chemestry-101](https://huggingface.co/datasets/tasksource/social-chemestry-101) | Legacy Download | | [Moral Stories](https://huggingface.co/datasets/demelin/moral_stories) | Legacy Download | | [The Common Pile v0.1](https://huggingface.co/common-pile) | Legacy Download | | [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) | Legacy Download | | [MegaMath](https://huggingface.co/datasets/LLM360/MegaMath) | Legacy Download | | [FastChat](https://github.com/lm-sys/FastChat) | 6/30/2025 | ## Private Non-publicly Accessible Datasets of Third Parties | Dataset | | :---- | | Global Regulation | | Workbench | ## Online Dataset Sources The English Common Crawl data was downloaded from the Common Crawl Foundation (see their [FAQ](https://commoncrawl.org/faq) for details on their crawling) and includes the snapshots 
CC-MAIN-2013-20 through CC-MAIN-2025-13. The data was subsequently deduplicated and filtered in various ways described in the [Nemotron-CC paper](https://arxiv.org/abs/2412.02595). Additionally, we extracted data for fifteen languages from the following three Common Crawl snapshots: CC-MAIN-2024-51, CC-MAIN-2025-08, CC-MAIN-2025-18. The fifteen languages included were Arabic, Chinese, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Swedish, and Thai. As we did not have reliable multilingual model-based quality classifiers available, we applied just heuristic filtering instead—similar to what we did for lower quality English data in the Nemotron-CC pipeline, but selectively removing some filters for some languages that did not work well. Deduplication was done in the same way as for Nemotron-CC. The GitHub Crawl was collected using the GitHub REST API and the Amazon S3 API. Each crawl was operated in accordance with the rate limits set by its respective source, either GitHub or S3. We collect raw source code and subsequently remove any having a license which does not exist in our permissive-license set (for additional details, refer to the technical report). | Dataset | Modality | Dataset Size (Tokens) | Collection Period | | :---- | :---- | :---- | :---- | | English Common Crawl | Text | 3.360T | 4/8/2025 | | Multilingual Common Crawl | Text | 812.7B | 5/1/2025 | | GitHub Crawl | Text | 747.4B | 4/29/2025 | ## NVIDIA-Sourced Synthetic Datasets | Dataset | Modality | Dataset Size (Tokens) | Seed Dataset | Model(s) used for generation | | :---- | :---- | :---- | :---- | :---- | | Synthetic Art of Problem Solving from DeepSeek-R1 | Text | 25.5B | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions); | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) | | Synthetic Moral Stories and Social Chemistry from Mixtral-8x22B-v0.1 | Text | 327M | [social-chemestry-101](https://huggingface.co/datasets/tasksource/social-chemestry-101); [Moral Stories](https://huggingface.co/datasets/demelin/moral_stories) | [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) | | Synthetic Social Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 83.6M | [OpenStax \- CC BY-SA subset](https://openstax.org/) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | | Synthetic Health Sciences seeded with OpenStax from DeepSeek-V3, Mixtral-8x22B-v0.1, and Qwen2.5-72B | Text | 9.7M | [OpenStax \- CC BY-SA subset](https://openstax.org/) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | | Synthetic STEM seeded with OpenStax, Open Textbook Library, and GSM8K from DeepSeek-R1, DeepSeek-V3, DeepSeek-V3-0324, and Qwen2.5-72B | Text | 175M | [OpenStax \- CC BY-SA subset](https://openstax.org/); [GSM8K](https://github.com/openai/grade-school-math); [Open Textbook Library \- CC BY-SA & GNU subset](https://open.umn.edu/opentextbooks/textbooks/) | 
[DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1), [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324); [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) | | [Nemotron-PrismMath](https://huggingface.co/datasets/nvidia/Nemotron-PrismMath) | Text | 4.6B | [Big-Math-RL-Verified](https://huggingface.co/datasets/SynthLabsAI/Big-Math-RL-Verified); [OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) | [Qwen2.5-0.5B-instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct), [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct); [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) | | Synthetic Question Answering Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | 350M | [arXiv](https://info.arxiv.org/help/bulk_data/index.html); [National Institutes of Health ExPorter](https://www.nih.gov/); [BioRxiv](https://www.biorxiv.org/tdm); [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/); [USPTO Backgrounds](https://data.uspto.gov/apis/transition-guide/bdss#pats); [peS2o](https://huggingface.co/datasets/allenai/peS2o); Global Regulation; [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) | | Synthetic FineMath-4+ Reprocessed from DeepSeek-V3 | Text | 9.2B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3) | | Synthetic FineMath-3+ Reprocessed from phi-4 | Text | 27.6B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) | | Synthetic Union-3+ Reprocessed from phi-4 | Text | 93.1B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) | | Refreshed [Nemotron-MIND](https://huggingface.co/datasets/nvidia/Nemotron-MIND) from phi-4 | Text | 73B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) | | Synthetic Union-4+ Reprocessed from phi-4 | Text | 14.12B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) | | Synthetic Union-3+ minus 4+ Reprocessed from phi-4 | Text | 78.95B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) | | Synthetic Union-3 Refreshed from phi-4 | Text | 80.94B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) | | Synthetic Union-4+ Refreshed from phi-4 | Text | 52.32B | [Common Crawl](https://commoncrawl.org/latest-crawl) | [phi-4](https://huggingface.co/microsoft/phi-4) | | Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from DeepSeek-V3 and DeepSeek-V3-0324 | Text | 4.0B | [AQUA-RAT](https://huggingface.co/datasets/deepmind/aqua_rat); [LogiQA](https://huggingface.co/datasets/lucasmccabe/logiqa); [AR-LSAT](https://github.com/zhongwanjun/AR-LSAT) | [DeepSeek-V3](https://huggingface.co/deepseek-ai/DeepSeek-V3); [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324) | | Synthetic AGIEval seeded with AQUA-RAT, LogiQA, and AR-LSAT from Qwen3-30B-A3B | Text | 4.2B | 
[AQUA-RAT](https://huggingface.co/datasets/deepmind/aqua_rat); [LogiQA](https://huggingface.co/datasets/lucasmccabe/logiqa); [AR-LSAT](https://github.com/zhongwanjun/AR-LSAT) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) | | Synthetic Art of Problem Solving from Qwen2.5-32B-Instruct, Qwen2.5-Math-72B, Qwen2.5-Math-7B, and Qwen2.5-72B-Instruct | Text | 83.1B | [Art of Problem Solving](https://artofproblemsolving.com/company); [American Mathematics Competitions 8](https://artofproblemsolving.com/wiki/index.php/AMC_8_Problems_and_Solutions); [American Mathematics Competitions 10](https://artofproblemsolving.com/wiki/index.php/AMC_10_Problems_and_Solutions); [GSM8K](https://github.com/openai/grade-school-math); [PRM800K](https://github.com/openai/prm800k) | [Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct); [Qwen2.5-Math-72B](https://huggingface.co/Qwen/Qwen2.5-Math-72B); [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B); [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) | | Synthetic MMLU Auxiliary Train from DeepSeek-R1 | Text | 0.5B | [MMLU Auxiliary Train](https://huggingface.co/datasets/cais/mmlu/viewer/all/auxiliary_train) | [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) | | Synthetic Long Context Continued Post-Training Data from Papers and Permissible Books from Qwen2.5-72B-Instruct | Text | 5.4B | [arXiv](https://info.arxiv.org/help/bulk_data/index.html); [National Institutes of Health ExPorter](https://www.nih.gov/); [BioRxiv](https://www.biorxiv.org/tdm); [PMC Article](https://pmc.ncbi.nlm.nih.gov/tools/textmining/); [USPTO Backgrounds](https://data.uspto.gov/apis/transition-guide/bdss#pats); [peS2o](https://huggingface.co/datasets/allenai/peS2o); Global Regulation; [CORE](https://core.ac.uk/documentation/dataset); [PG-19](https://github.com/google-deepmind/pg19); [DOAB CC BY & CC BY-SA subset](https://www.doabooks.org/en); [NDLTD](https://ndltd.org/thesis-resources/global-etd-search/) | [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) | | Synthetic Common Crawl from Qwen3-30B-A3B and Mistral-Nemo-12B-Instruct | Text | 1.949T | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B); [Mistral-NeMo-12B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-12B-Instruct) | | Synthetic Multilingual Data from Common Crawl from Qwen3-30B-A3B | Text | 997.3B | [Common Crawl](https://commoncrawl.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) | | Synthetic Multilingual Data from Wikimedia from Qwen3-30B-A3B | Text | 55.1B | [Wikimedia](https://dumps.wikimedia.org/) | [Qwen3-30B-A3B](https://huggingface.co/Qwen/Qwen3-30B-A3B) | | Synthetic OpenMathReasoning from DeepSeek-R1-0528 | Text | 1.5M | [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) | | Synthetic OpenCodeReasoning from DeepSeek-R1-0528 | Text | 1.1M | [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) | | Synthetic Science Data from DeepSeek-R1-0528 | Text | 1.5M | \- | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) | | Synthetic Humanity's Last Exam from DeepSeek-R1-0528 | Text | 460K | [Humanity's Last Exam](https://huggingface.co/datasets/cais/hle) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) | | 
Synthetic ToolBench from Qwen3-235B-A22B | Text | 400K | [ToolBench](https://github.com/OpenBMB/ToolBench) | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B) | | Synthetic Nemotron Content Safety Dataset V2, eval-safety, Gretel Synthetic Safety Alignment, and RedTeam\_2K from DeepSeek-R1-0528 | Text | 52K | [Nemotron Content Safety Dataset V2](https://huggingface.co/datasets/nvidia/Aegis-AI-Content-Safety-Dataset-2.0); [eval-safety](https://github.com/CrystalEye42/eval-safety/blob/main/malicious_tasks_dataset.yaml); [Gretel Synthetic Safety Alignment](https://huggingface.co/datasets/gretelai/gretel-safety-alignment-en-v1); [RedTeam\_2K](https://huggingface.co/datasets/JailbreakV-28K/JailBreakV-28k/viewer/RedTeam_2K) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528) | | Synthetic HelpSteer from Qwen3-235B-A22B | Text | 120K | [HelpSteer3](https://huggingface.co/datasets/nvidia/HelpSteer3); [HelpSteer2](https://huggingface.co/datasets/nvidia/HelpSteer2) | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B) | | Synthetic Alignment data from Mixtral-8x22B-Instruct-v0.1, Mixtral-8x7B-Instruct-v0.1, and Nemotron-4 Family | Text | 400K | [HelpSteer2](https://huggingface.co/datasets/nvidia/HelpSteer2); [C4](https://huggingface.co/datasets/allenai/c4); [LMSYS-Chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m); [ShareGPT52K](https://huggingface.co/datasets/RyokoAI/ShareGPT52K); [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/tigerbot-kaggle-leetcodesolutions-en-2k); [GSM8K](https://github.com/openai/grade-school-math); [PRM800K](https://github.com/openai/prm800k); lm\_identity (NVIDIA internal); [FinQA](https://finqasite.github.io/); [WikiTableQuestions](https://huggingface.co/datasets/wikitablequestions); [Riddles](https://github.com/crawsome/riddles); ChatQA nvolve-multiturn (NVIDIA internal); [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2); [SciBench](https://github.com/mandyyyyii/scibench); [OpenBookQA](https://github.com/allenai/OpenBookQA); [Advanced Reasoning Benchmark](https://github.com/TheDuckAI/arb); [Public Software Heritage S3](https://docs.softwareheritage.org/devel/swh-export/graph/dataset.html#summary-of-dataset-versions); [Khan Academy Math Keywords](https://www.khanacademy.org/math) | Nemotron-4-15B-Base (NVIDIA internal); Nemotron-4-15B-Instruct (NVIDIA internal); [Nemotron-4-340B-Base](https://huggingface.co/nvidia/Nemotron-4-340B-Base); [Nemotron-4-340B-Instruct](https://huggingface.co/nvidia/Nemotron-4-340B-Instruct); [Nemotron-4-340B-Reward](https://huggingface.co/nvidia/Nemotron-4-340B-Reward); [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1); [Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1) | | Synthetic LMSYS-Chat-1M from Qwen3-235B-A22B | Text | 1M | [LMSYS-Chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B) | | Synthetic Multilingual Reasoning data from DeepSeek-R1-0528, Qwen2.5-32B-Instruct-AWQ, and Qwen2.5-14B-Instruct | Text | 25M | [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning); [OpenCodeReasoning](https://huggingface.co/datasets/nvidia/OpenCodeReasoning) | [DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528); [Qwen2.5-32B-Instruct-AWQ](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct-AWQ) (translation); 
[Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) (translation) | | Synthetic Multilingual Reasoning data from Qwen3-235B-A22B and Gemma 3 Post-Trained models | Text | 5M | [WildChat](https://huggingface.co/datasets/allenai/WildChat-1M) | [Qwen3-235B-A22B](https://huggingface.co/Qwen/Qwen3-235B-A22B); [Gemma 3 PT 12B](https://huggingface.co/google/gemma-3-12b-it); [Gemma 3 PT 27B](https://huggingface.co/google/gemma-3-27b-it) | ### Evaluation Dataset: * Data Collection Method by dataset: Hybrid: Human, Synthetic * Labeling Method by dataset: Hybrid: Automated, Human, Synthetic ## Inference - **Engines:** HF, vLLM, TRT-LLM - **Test Hardware:** NVIDIA A10G 24GB, H100 80GB ## Ethical Considerations NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our [Trustworthy AI terms of service](https://www.nvidia.com/en-us/agreements/trustworthy-ai/terms/), developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ [Bias](./bias.md), [Explainability](./explainability.md), [Safety & Security](./safety.md), and [Privacy](./privacy.md) Subcards. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/). ## Citation ``` @misc{nvidia2025nvidianemotronnano2, title={NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model}, author={NVIDIA}, year={2025}, eprint={2508.14444}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2508.14444}, } ``` <!--End Original Model Card--> --- # <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span> Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**: 👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) The full open-source code for the Quantum Network Monitor service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models if you want to do it yourself: [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder) 💬 **How to test**: Choose an **AI assistant type**: - `TurboLLM` (GPT-4.1-mini) - `HugLLM` (Hugging Face open-source models) - `TestLLM` (Experimental CPU-only) ### **What I’m Testing** I’m pushing the limits of **small open-source models for AI network monitoring**, specifically: - **Function calling** against live network services - **How small can a model go** while still handling: - Automated **Nmap security scans** - **Quantum-readiness checks** - **Network Monitoring tasks** 🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker space): - ✅ **Zero-configuration setup** - ⏳ 30s load time (slow inference but **no API costs**). No token limit, as the cost is low. - 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate! ### **Other Assistants** 🟢 **TurboLLM** – Uses **gpt-4.1-mini**: - **It performs very well, but unfortunately OpenAI charges per token. 
For this reason, token usage is limited.** - **Create custom cmd processors to run .net code on Quantum Network Monitor Agents** - **Real-time network diagnostics and monitoring** - **Security Audits** - **Penetration testing** (Nmap/Metasploit) 🔵 **HugLLM** – Latest open-source models: - 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita. ### 💡 **Example commands you could test**: 1. `"Give me info on my website's SSL certificate"` 2. `"Check if my server is using quantum-safe encryption for communication"` 3. `"Run a comprehensive security audit on my server"` 4. `"Create a cmd processor to ... (whatever you want)"` Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution! ### Final Word I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful. If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone. I'm also open to job opportunities or sponsorship. Thank you! 😊
bah63843/blockassist-bc-plump_fast_antelope_1757073450
bah63843
2025-09-05T11:58:39Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "plump fast antelope", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T11:58:30Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - plump fast antelope --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ashishscapsitech123/qwen2.5_3b_4bit_8600_full_finetuned_test
ashishscapsitech123
2025-09-05T11:53:50Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_5_vl", "image-to-text", "text-generation-inference", "unsloth", "trl", "en", "base_model:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit", "base_model:quantized:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
image-to-text
2025-09-05T11:52:05Z
--- base_model: unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2_5_vl - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ashishscapsitech123 - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
boahancock/blockassist-bc-iridescent_rapid_toad_1757072839
boahancock
2025-09-05T11:53:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "iridescent rapid toad", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T11:48:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - iridescent rapid toad --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
KevinZonda/MedSPO-3B
KevinZonda
2025-09-05T11:45:23Z
18
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "en", "dataset:KevinZonda/PubMed-IV", "dataset:KevinZonda/PM4-V3-SPO", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-18T06:22:28Z
--- language: - en base_model: - Qwen/Qwen2.5-3B-Instruct pipeline_tag: text-generation datasets: - KevinZonda/PubMed-IV - KevinZonda/PM4-V3-SPO library_name: transformers license: apache-2.0 --- # MedSPO-3B MedSPO-3B is a fine-tuned [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) model specifically designed for biomedical subject-predicate-object (SPO) extraction tasks. This model is trained on the [PubMed-IV](https://huggingface.co/datasets/KevinZonda/PubMed-IV) dataset using SPO extraction knowledge distilled from [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324). ## Magic Prompt System Prompt: ```plain You are a biomedical specialist. You are given one paper (title, abstract, conclusion). Extract all biomedical-related Subject-Predicate-Object (SPO) Triple in valid JSON format wrapped in <output> tag. ``` User Prompt: ```xml <input> <title></title> <abstract></abstract> <conclusion></conclusion> </input> ```
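A minimal inference sketch for this prompt format (not from the model authors; it assumes the standard `transformers` chat-template API for Qwen2.5-based models, and the paper title is a hypothetical placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KevinZonda/MedSPO-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# System and user prompts follow the "Magic Prompt" format above.
system_prompt = (
    "You are a biomedical specialist. You are given one paper (title, abstract, conclusion). "
    "Extract all biomedical-related Subject-Predicate-Object (SPO) Triple in valid JSON format "
    "wrapped in <output> tag."
)
user_prompt = (
    "<input>\n"
    "<title>Aspirin and cardiovascular risk</title>\n"  # hypothetical paper
    "<abstract>...</abstract>\n"
    "<conclusion>...</conclusion>\n"
    "</input>"
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```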
Sahildo/blockassist-bc-sizable_lanky_owl_1757072646
Sahildo
2025-09-05T11:44:51Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "sizable lanky owl", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T11:44:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - sizable lanky owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
DapaoZeng/ddpm-celebahq-finetuned-butterflies-2epochs
DapaoZeng
2025-09-05T11:44:41Z
0
0
diffusers
[ "diffusers", "safetensors", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2025-09-05T11:44:29Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) Describe your model here ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('DapaoZeng/ddpm-celebahq-finetuned-butterflies-2epochs') image = pipeline().images[0] image ```
KevinZonda/MedSPO-7B
KevinZonda
2025-09-05T11:44:41Z
23
0
transformers
[ "transformers", "safetensors", "text-generation", "conversational", "en", "dataset:KevinZonda/PubMed-IV", "dataset:KevinZonda/PM4-V3-SPO", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2025-07-27T17:45:58Z
--- language: - en base_model: - Qwen/Qwen2.5-7B-Instruct pipeline_tag: text-generation datasets: - KevinZonda/PubMed-IV - KevinZonda/PM4-V3-SPO library_name: transformers license: apache-2.0 --- # MedSPO-7B MedSPO-7B is a fine-tuned [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) model specifically designed for biomedical subject-predicate-object (SPO) extraction tasks. This model is trained on the [PubMed-IV](https://huggingface.co/datasets/KevinZonda/PubMed-IV) dataset using SPO extraction knowledge distilled from [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324). ## Magic Prompt System Prompt: ```plain You are a biomedical specialist. You are given one paper (title, abstract, conclusion). Extract all biomedical-related Subject-Predicate-Object (SPO) Triple in valid JSON format wrapped in <output> tag. ``` User Prompt: ```xml <input> <title></title> <abstract></abstract> <conclusion></conclusion> </input> ```
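The model wraps its JSON triples in an `<output>` tag, so responses need a small post-processing step; a hedged sketch (the regex and the demo response string are illustrative, not part of the card):

```python
import json
import re

def parse_spo(response: str) -> list:
    """Pull the JSON payload out of a MedSPO <output>...</output> response."""
    match = re.search(r"<output>(.*?)</output>", response, re.DOTALL)
    if match is None:
        raise ValueError("no <output> tag found in model response")
    return json.loads(match.group(1))

# Hypothetical model response for demonstration:
demo = '<output>[{"subject": "aspirin", "predicate": "reduces", "object": "cardiovascular risk"}]</output>'
print(parse_spo(demo))  # [{'subject': 'aspirin', 'predicate': 'reduces', 'object': 'cardiovascular risk'}]
```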
knightluffy/qwen34bfine1
knightluffy
2025-09-05T11:43:40Z
0
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
text-generation
2025-09-05T07:08:14Z
--- base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** knightluffy - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
dsagasdgds/blockassist-bc-unseen_camouflaged_komodo_1757071210
dsagasdgds
2025-09-05T11:41:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "unseen camouflaged komodo", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T11:41:10Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - unseen camouflaged komodo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
cactus-S/blockassist-bc-reclusive_arctic_panther_1757070969
cactus-S
2025-09-05T11:39:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "reclusive arctic panther", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T11:39:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - reclusive arctic panther --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kafa22/blockassist-bc-regal_leggy_hummingbird_1757072314
kafa22
2025-09-05T11:39:15Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal leggy hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T11:39:11Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal leggy hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mradermacher/AnubisLemonade-70B-v1.1-GGUF
mradermacher
2025-09-05T11:38:15Z
337
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:ockerman0/AnubisLemonade-70B-v1.1", "base_model:quantized:ockerman0/AnubisLemonade-70B-v1.1", "endpoints_compatible", "region:us", "conversational" ]
null
2025-09-05T03:14:00Z
--- base_model: ockerman0/AnubisLemonade-70B-v1.1 language: - en library_name: transformers mradermacher: readme_rev: 1 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> <!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS --> <!-- ### quants_skip: --> <!-- ### skip_mmproj: --> static quants of https://huggingface.co/ockerman0/AnubisLemonade-70B-v1.1 <!-- provided-files --> ***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#AnubisLemonade-70B-v1.1-GGUF).*** weighted/imatrix quants are available at https://huggingface.co/mradermacher/AnubisLemonade-70B-v1.1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/AnubisLemonade-70B-v1.1-GGUF/resolve/main/AnubisLemonade-70B-v1.1.Q2_K.gguf) | Q2_K | 26.5 | | | [GGUF](https://huggingface.co/mradermacher/AnubisLemonade-70B-v1.1-GGUF/resolve/main/AnubisLemonade-70B-v1.1.Q3_K_S.gguf) | Q3_K_S | 31.0 | | | [GGUF](https://huggingface.co/mradermacher/AnubisLemonade-70B-v1.1-GGUF/resolve/main/AnubisLemonade-70B-v1.1.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality | | [GGUF](https://huggingface.co/mradermacher/AnubisLemonade-70B-v1.1-GGUF/resolve/main/AnubisLemonade-70B-v1.1.Q3_K_L.gguf) | Q3_K_L | 37.2 | | | [GGUF](https://huggingface.co/mradermacher/AnubisLemonade-70B-v1.1-GGUF/resolve/main/AnubisLemonade-70B-v1.1.IQ4_XS.gguf) | IQ4_XS | 38.4 | | | [GGUF](https://huggingface.co/mradermacher/AnubisLemonade-70B-v1.1-GGUF/resolve/main/AnubisLemonade-70B-v1.1.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AnubisLemonade-70B-v1.1-GGUF/resolve/main/AnubisLemonade-70B-v1.1.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/AnubisLemonade-70B-v1.1-GGUF/resolve/main/AnubisLemonade-70B-v1.1.Q5_K_S.gguf) | Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/AnubisLemonade-70B-v1.1-GGUF/resolve/main/AnubisLemonade-70B-v1.1.Q5_K_M.gguf) | Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/AnubisLemonade-70B-v1.1-GGUF/resolve/main/AnubisLemonade-70B-v1.1.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/AnubisLemonade-70B-v1.1-GGUF/resolve/main/AnubisLemonade-70B-v1.1.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality | | [PART 1](https://huggingface.co/mradermacher/AnubisLemonade-70B-v1.1-GGUF/resolve/main/AnubisLemonade-70B-v1.1.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/AnubisLemonade-70B-v1.1-GGUF/resolve/main/AnubisLemonade-70B-v1.1.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for 
some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
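The Q6_K and Q8_0 quants above ship as split files that must be joined back into a single GGUF before loading; a minimal sketch of the plain byte concatenation this requires (file names assumed from the table above, downloaded into the current directory):

```python
import shutil
from pathlib import Path

# Join the downloaded .part1of2/.part2of2 files back into one GGUF
# (the parts are a plain byte split, so simple concatenation suffices).
parts = sorted(Path(".").glob("AnubisLemonade-70B-v1.1.Q6_K.gguf.part*"))
assert parts, "download the part files first"
with open("AnubisLemonade-70B-v1.1.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```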
abhi6007/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-striped_gliding_antelope
abhi6007
2025-09-05T11:38:11Z
13
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am striped_gliding_antelope", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-07-06T14:22:19Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am striped_gliding_antelope --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AnerYubo/blockassist-bc-shaggy_melodic_cobra_1757072278
AnerYubo
2025-09-05T11:38:01Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "shaggy melodic cobra", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T11:37:58Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - shaggy melodic cobra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kojeklollipop/blockassist-bc-spotted_amphibious_stork_1757070503
kojeklollipop
2025-09-05T11:37:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "spotted amphibious stork", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T11:37:33Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - spotted amphibious stork --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
raihannabiil/blockassist-bc-humming_rugged_viper_1757070092
raihannabiil
2025-09-05T11:36:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "humming rugged viper", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T11:36:50Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - humming rugged viper --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
giovannidemuri/llama3b-llama8b-er-v584-seed2-seed2-hx-openmath-fpt
giovannidemuri
2025-09-05T11:34:33Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-09-05T10:02:43Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kiok1250/blockassist-bc-beaked_insectivorous_lobster_1757071955
kiok1250
2025-09-05T11:33:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked insectivorous lobster", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T11:33:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - beaked insectivorous lobster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
natukundaphiionah/qwen3-14b-sunflower-20250905
natukundaphiionah
2025-09-05T11:32:12Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:jq/qwen3-14b-ug40-pretrained", "base_model:finetune:jq/qwen3-14b-ug40-pretrained", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-05T11:31:58Z
--- base_model: jq/qwen3-14b-ug40-pretrained tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** natukundaphiionah - **License:** apache-2.0 - **Finetuned from model:** jq/qwen3-14b-ug40-pretrained This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
kiok1250/blockassist-bc-beaked_insectivorous_lobster_1757071734
kiok1250
2025-09-05T11:29:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "beaked insectivorous lobster", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T11:29:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - beaked insectivorous lobster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
kafa22/blockassist-bc-regal_leggy_hummingbird_1757071717
kafa22
2025-09-05T11:29:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "regal leggy hummingbird", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T11:29:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - regal leggy hummingbird --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
hamedkharazmi/blockassist-bc-tough_webbed_hamster_1757069695
hamedkharazmi
2025-09-05T11:26:45Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tough webbed hamster", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T11:26:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tough webbed hamster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AnerYubo/blockassist-bc-dormant_strong_badger_1757071491
AnerYubo
2025-09-05T11:24:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "dormant strong badger", "arxiv:2504.07091", "region:us" ]
null
2025-09-05T11:24:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - dormant strong badger --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Zhouyonghao/Qwen3-lora_model
Zhouyonghao
2025-09-05T11:22:38Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-09-05T11:21:24Z
--- base_model: unsloth/qwen3-14b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Zhouyonghao - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
DTebias/Qwen3-0.6B-Gensyn-Swarm-hoarse_muscular_cassowary
DTebias
2025-09-05T11:22:08Z
20
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am hoarse_muscular_cassowary", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-06-25T21:53:00Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am hoarse_muscular_cassowary --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]