Column summary:

| column | dtype | min | max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-06 18:27:02 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (544 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-06 18:26:43 |
| card | string (length) | 11 | 1.01M |
phospho-app/svitv-ACT_BBOX-Cub777-nigap
phospho-app
2025-08-13T11:54:23Z
0
0
phosphobot
[ "phosphobot", "safetensors", "act", "robotics", "dataset:phospho-app/Cub777_bboxes", "region:us" ]
robotics
2025-08-13T11:31:12Z
--- datasets: phospho-app/Cub777_bboxes library_name: phosphobot pipeline_tag: robotics model_name: act tags: - phosphobot - act task_categories: - robotics --- # act Model - phospho Training Pipeline ## This model was trained using **phospho**. Training was successful; try it out on your robot! ## Training parameters: - **Dataset**: [phospho-app/Cub777_bboxes](https://huggingface.co/datasets/phospho-app/Cub777_bboxes) - **Wandb run URL**: None - **Epochs**: None - **Batch size**: 100 - **Training steps**: 10000 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
Addax-Data-Science/EUR-DF-v1-3
Addax-Data-Science
2025-08-13T11:53:33Z
0
0
null
[ "region:us" ]
null
2025-08-05T13:39:27Z
--- {} --- This repository contains open-source models redistributed for easy integration with [AddaxAI](https://addaxdatascience.com/addaxai/), hosted by [Addax Data Science](https://addaxdatascience.com/). Each model retains its original license (see license files) and attribution. We comply with all original license terms. Users must review and comply with individual model licenses before use. See below for detailed model information including original sources, licenses, and attributions. <strong>Developer</strong> The Deepfaune initiative <strong>Links</strong> <ul> <li><a href="https://www.deepfaune.cnrs.fr/en/">Learn more</a></li> <li><a href="https://link.springer.com/article/10.1007/s10344-023-01742-7">Cite</a></li> <li><a href="https://creativecommons.org/licenses/by-sa/4.0/">License</a></li> </ul>
SP4ND4N/lora_model
SP4ND4N
2025-08-13T11:53:00Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/SmolLM2-135M", "base_model:finetune:unsloth/SmolLM2-135M", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T11:52:30Z
--- base_model: unsloth/SmolLM2-135M tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** SP4ND4N - **License:** apache-2.0 - **Finetuned from model:** unsloth/SmolLM2-135M This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
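Not part of the original card: a minimal inference sketch, assuming the repo hosts merged, standalone causal-LM weights loadable with the 🤗 transformers text-generation pipeline (the prompt is illustrative).

```py
# Hedged sketch: assumes SP4ND4N/lora_model contains merged SmolLM2 weights.
from transformers import pipeline

generator = pipeline("text-generation", model="SP4ND4N/lora_model")
out = generator("Once upon a time,", max_new_tokens=40)
print(out[0]["generated_text"])
```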
Nullifier00/blockassist-bc-slimy_lanky_bison_1755084449
Nullifier00
2025-08-13T11:49:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "slimy lanky bison", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T11:49:05Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - slimy lanky bison --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
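The card gives no usage instructions; as a hedged sketch, the checkpoint files can be fetched locally with `huggingface_hub` (repo id taken from this card):

```py
# Hedged sketch: download this BlockAssist checkpoint to a local directory.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Nullifier00/blockassist-bc-slimy_lanky_bison_1755084449")
print(f"Files downloaded to: {local_dir}")
```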
AltinAziziNovomind/merged-qwen-3b-coder-instuct-v10
AltinAziziNovomind
2025-08-13T11:44:53Z
0
0
null
[ "safetensors", "qwen2", "license:apache-2.0", "region:us" ]
null
2025-08-13T11:23:31Z
--- license: apache-2.0 ---
Okilzhon/smart
Okilzhon
2025-08-13T11:43:14Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-13T11:43:14Z
--- license: apache-2.0 ---
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1755085279
fatepurriyaz
2025-08-13T11:42:11Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "aquatic pawing pig", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T11:42:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - aquatic pawing pig --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mosesshah/blockassist-bc-thorny_mighty_macaque_1755084888
mosesshah
2025-08-13T11:37:31Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny mighty macaque", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T11:37:28Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny mighty macaque --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
fatepurriyaz/blockassist-bc-aquatic_pawing_pig_1755084868
fatepurriyaz
2025-08-13T11:35:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "aquatic pawing pig", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T11:34:57Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - aquatic pawing pig --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
johnyzhao/blockassist-bc-slow_dense_leopard_1755084685
johnyzhao
2025-08-13T11:32:29Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "slow dense leopard", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T11:32:02Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - slow dense leopard --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
motza0025/blockassist-bc-darting_mottled_dog_1755083235
motza0025
2025-08-13T11:31:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "darting mottled dog", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T11:30:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - darting mottled dog --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1755083025
koloni
2025-08-13T11:29:54Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T11:29:47Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
maciejf/Lena
maciejf
2025-08-13T11:29:04Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-13T11:12:58Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Lena --- # Lena <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using the AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Lena` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Lena", "lora_weights": "https://huggingface.co/maciejf/Lena/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('maciejf/Lena', weight_name='lora.safetensors') image = pipeline('Lena').images[0] image.save("my_image.png")  # save the generated image ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters). ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/maciejf/Lena/discussions) to add images that show off what you’ve made with this LoRA.
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755083438
Sayemahsjn
2025-08-13T11:28:37Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T11:28:27Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
aiface/bert-base-multilingual-uncased-sentiment_v3
aiface
2025-08-13T11:28:28Z
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:nlptown/bert-base-multilingual-uncased-sentiment", "base_model:finetune:nlptown/bert-base-multilingual-uncased-sentiment", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-13T08:09:21Z
--- library_name: transformers license: mit base_model: nlptown/bert-base-multilingual-uncased-sentiment tags: - generated_from_trainer metrics: - accuracy model-index: - name: bert-base-multilingual-uncased-sentiment_v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-uncased-sentiment_v3 This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4987 - Accuracy: 0.9286 - Precision Macro: 0.8226 - Recall Macro: 0.7931 - F1 Macro: 0.8061 - F1 Weighted: 0.9269 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | F1 Weighted | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:-----------:| | 0.3933 | 1.0 | 90 | 0.2349 | 0.9292 | 0.8484 | 0.7197 | 0.7474 | 0.9202 | | 0.2051 | 2.0 | 180 | 0.2166 | 0.9236 | 0.8134 | 0.7619 | 0.7811 | 0.9199 | | 0.1494 | 3.0 | 270 | 0.2369 | 0.9362 | 0.8619 | 0.7775 | 0.8072 | 0.9321 | | 0.1233 | 4.0 | 360 | 0.2290 | 0.9343 | 0.8660 | 0.7894 | 0.8176 | 0.9309 | | 0.0838 | 5.0 | 450 | 0.2490 | 0.9375 | 0.8610 | 0.8200 | 0.8378 | 0.9358 | | 0.0799 | 6.0 | 540 | 0.2579 | 0.9343 | 0.8528 | 0.7977 | 0.8197 | 0.9317 | | 0.0481 | 7.0 | 630 | 0.3494 | 0.9223 | 0.7926 | 0.8252 | 0.8064 | 0.9247 | | 0.0406 | 8.0 | 720 | 0.3154 | 0.9368 | 0.8591 | 0.7986 | 0.8227 | 0.9341 | | 0.032 | 9.0 | 810 | 0.3219 | 0.9305 | 0.8238 | 0.8153 | 0.8194 | 0.9301 | | 0.0333 | 10.0 | 900 | 0.3787 | 0.9286 | 0.8387 | 0.8048 | 0.8198 | 0.9270 | | 0.0278 | 11.0 | 990 | 0.3914 | 0.9311 | 0.8432 | 0.7948 | 0.8148 | 0.9288 | | 0.0165 | 12.0 | 1080 | 0.4155 | 0.9318 | 0.8627 | 0.7830 | 0.8120 | 0.9282 | | 0.0126 | 13.0 | 1170 | 0.4029 | 0.9368 | 0.8550 | 0.8161 | 0.8328 | 0.9352 | | 0.0133 | 14.0 | 1260 | 0.4398 | 0.9324 | 0.8460 | 0.7915 | 0.8134 | 0.9297 | | 0.01 | 15.0 | 1350 | 0.4571 | 0.9318 | 0.8347 | 0.7913 | 0.8094 | 0.9294 | | 0.008 | 16.0 | 1440 | 0.4685 | 0.9299 | 0.8303 | 0.7899 | 0.8070 | 0.9276 | | 0.0058 | 17.0 | 1530 | 0.4846 | 0.9318 | 0.8403 | 0.7954 | 0.8142 | 0.9295 | | 0.0022 | 18.0 | 1620 | 0.4905 | 0.9280 | 0.8249 | 0.7928 | 0.8068 | 0.9262 | | 0.0038 | 19.0 | 1710 | 0.5043 | 0.9299 | 0.8272 | 0.7897 | 0.8057 | 0.9277 | | 0.0015 | 20.0 | 1800 | 0.4987 | 0.9286 | 0.8226 | 0.7931 | 0.8061 | 0.9269 | ### Framework versions - Transformers 4.55.0 - Pytorch 2.7.0+cu126 - Datasets 4.0.0 - Tokenizers 0.21.4
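A hedged inference sketch (not from the original card), assuming the standard transformers text-classification pipeline applies to this fine-tune:

```py
# Hedged sketch: score a sentence with the fine-tuned sentiment classifier.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="aiface/bert-base-multilingual-uncased-sentiment_v3",
)
print(classifier("This product works really well!"))  # -> [{'label': ..., 'score': ...}]
```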
mohitskaushal/gemma3-1b-fp16-f-color
mohitskaushal
2025-08-13T11:18:55Z
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T11:16:06Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
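The card's "How to Get Started" section is empty; below is a minimal sketch under the assumption that standard transformers text-generation usage applies to this gemma3_text checkpoint.

```py
# Hedged sketch: basic generation with this checkpoint (assumes standalone weights).
from transformers import pipeline

pipe = pipeline("text-generation", model="mohitskaushal/gemma3-1b-fp16-f-color")
print(pipe("Describe the color red in one sentence.", max_new_tokens=32)[0]["generated_text"])
```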
milanaOG/Qwen3-0.6B-Gensyn-Swarm-hardy_whiskered_quail
milanaOG
2025-08-13T11:16:47Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am hardy_whiskered_quail", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T11:16:26Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am hardy_whiskered_quail --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AmanyAzzam/merged_medgemma_spine
AmanyAzzam
2025-08-13T11:15:09Z
0
0
null
[ "safetensors", "gemma3", "license:apache-2.0", "region:us" ]
null
2025-08-13T11:12:04Z
--- license: apache-2.0 ---
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755083517
0xaoyama
2025-08-13T11:12:25Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "muscular zealous gorilla", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T11:12:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - muscular zealous gorilla --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
nokolora/ffxiv-krile
nokolora
2025-08-13T11:11:29Z
74
1
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:adapter:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:cc0-1.0", "region:us" ]
text-to-image
2025-03-27T10:42:44Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- ffxivKrile, solo, lalafell, yellow jacket, brown hair, short hair, denim pants, aqua eyes, belt, cat hood, necklace, pouch, pocket, blue scarf, single clover earring, full body, boots, looking at viewer, white background, simple background, standing, parameters: negative_prompt: 3d, realistic, output: url: images/ff14-krile-default_ILV10.preview.png - text: >- ffxivkrile, solo, simple white shirt, denim shorts, aqua eyes, brown hair, short hair, ponytail, hair ribbon, (lalafell:0.75), (forehead:0.75), pointy ears, looking at viewer, white background, simple background, standing, full body, parameters: negative_prompt: 3d, realistic, output: url: images/ff14-krile-face_ILV10.preview.png - text: >- ffxivKrile, solo, (lalafell:0.75), yellow robe, brown robe, aqua eyes, (blue dress:0.75), (bow:0.75), gem, cat hood, brown hair, short hair, high collar, wide sleeves, forehead, smile, looking at viewer, white background, simple background, full body, standing, parameters: negative_prompt: 3d, realistic, output: url: images/ff14-krile-original_ILV10.preview.png base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 instance_prompt: null license: cc0-1.0 --- # FFXIV - Krile Maya Baldesion (LoRA) <Gallery /> ## Model description # FFXIV Krile Maya Baldesion LoRA Model Published: 2025-04-01, Updated: 2025-08-18 ## Contents LoRA models for Stable Diffusion that generate a character resembling Krile from the story: 1. ff14-krile-default: with her patch 7.x outfits 2. ff14-krile-face: trained only on facial features 3. ff14-krile-original: trained on her original robe style ## Model Type Based on Illustrious-XL V1.0. Please use a checkpoint from the same family. ## Trigger Words ### 1. default model > ffxivKrile, solo, lalafell, yellow jacket, brown hair, short hair, denim pants, aqua eyes, belt, cat hood, necklace, pouch, pocket, blue scarf, single clover earring, ### 2. face model > ffxivkrile, solo, white shirt, denim shorts, aqua eyes, brown hair, short hair, ponytail, hair ribbon, (lalafell:0.75), (forehead:0.75), pointy ears, ### 3. original model > ffxivKrile, solo, (lalafell:0.5), yellow robe, brown robe, aqua eyes, (blue dress:0.75), (bow:0.75), gem, cat hood, brown hair, short hair, high collar, wide sleeves, forehead, /*NEGATIVE white dress, blue gem, */ ## Note * The token `lalafell` is **very** important. Adjust the intensity to match your preference. * Her clothing has too many parts xD. Carefully adjust the intensity, especially for tokens containing colors. * If you want to draw a ponytail, move `cat hood` to negative and add `ponytail, hair ribbon`. * If the forehead is hidden by the bangs, adding `forehead:0.5` may work. * The older version is also kept in a folder; see the history of README.md for usage. ## Download model Weights for this model are available in Safetensors format. [Download](/nokolora/ffxiv-krile/tree/main) them in the Files & versions tab.
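Not part of the original card: a hedged diffusers sketch for applying one of these LoRAs to the Illustrious-XL (SDXL-family) base, assuming the base repo is loadable in diffusers format; the `weight_name` is a placeholder for whichever .safetensors file you download from the Files tab.

```py
# Hedged sketch: load a Krile LoRA on an Illustrious-XL base with diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "OnomaAIResearch/Illustrious-xl-early-release-v0", torch_dtype=torch.float16
).to("cuda")
# weight_name is hypothetical: use the actual filename from the Files tab.
pipe.load_lora_weights("nokolora/ffxiv-krile", weight_name="ff14-krile-default.safetensors")
image = pipe("ffxivKrile, solo, lalafell, yellow jacket, brown hair, short hair").images[0]
image.save("krile.png")
```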
forouzanfallah/sentinel_test3_fft_colorloss_caption-t2
forouzanfallah
2025-08-13T11:06:02Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "diffusers-training", "sd3", "sd3-diffusers", "controlnet", "base_model:stabilityai/stable-diffusion-3-medium-diffusers", "base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers", "license:openrail++", "region:us" ]
text-to-image
2025-08-13T06:12:48Z
--- base_model: stabilityai/stable-diffusion-3-medium-diffusers library_name: diffusers license: openrail++ inference: true tags: - text-to-image - diffusers-training - diffusers - sd3 - sd3-diffusers - controlnet --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SD3 controlnet-forouzanfallah/sentinel_test3_fft_colorloss_caption-t2 These are ControlNet weights trained on stabilityai/stable-diffusion-3-medium-diffusers with a new type of conditioning. The weights were trained using [ControlNet](https://github.com/lllyasviel/ControlNet) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/README_sd3.md). You can find some example images below. prompt: a high-resolution satellite image, sharp details, clear view from space ![images_0](./images_0.png) Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE). ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
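The card's "How to use" snippet is still a TODO; here is a minimal sketch, assuming the standard diffusers SD3 ControlNet API and a hypothetical local conditioning image.

```python
# Hedged sketch: inference with these SD3 ControlNet weights via diffusers.
import torch
from diffusers import SD3ControlNetModel, StableDiffusion3ControlNetPipeline
from diffusers.utils import load_image

controlnet = SD3ControlNetModel.from_pretrained(
    "forouzanfallah/sentinel_test3_fft_colorloss_caption-t2", torch_dtype=torch.float16
)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

control_image = load_image("conditioning.png")  # hypothetical conditioning input
image = pipe(
    "a high-resolution satellite image, sharp details, clear view from space",
    control_image=control_image,
).images[0]
image.save("sample.png")
```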
Fyuhei1989/minicompe_20250813
Fyuhei1989
2025-08-13T11:04:59Z
0
0
transformers
[ "transformers", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "base_model:unsloth/Qwen3-4B", "base_model:finetune:unsloth/Qwen3-4B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-13T11:04:54Z
--- base_model: unsloth/Qwen3-4B tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Fyuhei1989 - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen3-4B This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
xiaolinhui/OneRef
xiaolinhui
2025-08-13T11:04:35Z
0
0
null
[ "arxiv:2410.08021", "arxiv:2305.08685", "arxiv:2412.20206", "license:apache-2.0", "region:us" ]
null
2025-08-04T09:45:04Z
--- license: apache-2.0 --- [//]: # (<br />) <p align="center"> <h1 align="center">OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling</h1> <p align="center"> <b> NeurIPS 2024 </b> <br /> <a href="https://scholar.google.com.hk/citations?user=4rTE4ogAAAAJ&hl=zh-CN&oi=sra"><strong> Linhui Xiao </strong></a> · <a href="https://yangxs.ac.cn/home"><strong>Xiaoshan Yang </strong></a> · <a href="https://scholar.google.com.hk/citations?user=HBZ9plsAAAAJ&hl=zh-CN"><strong>Fang Peng </strong></a> · <a href="https://scholar.google.com.hk/citations?user=o_DllmIAAAAJ&hl=zh-CN"><strong>Yaowei Wang </strong></a> · <a href="https://scholar.google.com.hk/citations?user=hI9NRDkAAAAJ&hl=zh-CN"><strong>Changsheng Xu</strong></a> </p> <p align="center"> <a href='https://openreview.net/pdf?id=siPdcro6uD'> <img src='https://img.shields.io/badge/NeurIPS PDF-purple' alt='arXiv PDF'> </a> <a href='https://neurips.cc/virtual/2024/poster/93378'> <img src='https://img.shields.io/badge/NeurIPS Paper Homepage-blue' alt='arXiv PDF'> </a> <a href='https://neurips.cc/media/PosterPDFs/NeurIPS%202024/93378.png?t=1729402553.3015864'> <img src='https://img.shields.io/badge/NeurIPS Poster-lightblue' alt='arXiv PDF'> </a> <a href='https://neurips.cc/media/neurips-2024/Slides/93378_ROahXfO.pdf'> <img src='https://img.shields.io/badge/NeurIPS Slides-lightgreen' alt='arXiv PDF'> </a> <a href='https://arxiv.org/pdf/2410.08021'> <img src='https://img.shields.io/badge/arXiv-PDF-green?style=flat&logo=arXiv&logoColor=green' alt='arXiv PDF'> </a> <br /> <p align="center"> <img src='docs/fig1.jpg' align="center" width="95%"> </p> **<p align="center"> A comparison between the OneRef model and mainstream REC/RES architectures. </p>** This repository is the official PyTorch implementation for the paper [**OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling**](https://openreview.net/pdf?id=siPdcro6uD) ([Publication](https://proceedings.neurips.cc/paper_files/paper/2024/file/fcd812a51b8f8d05cfea22e3c9c4b369-Paper-Conference.pdf), [Github Code](https://github.com/linhuixiao/OneRef), [HuggingFace model](https://huggingface.co/xiaolinhui/OneRef)), which is an advanced version of our preliminary work **HiVG** ([Publication](https://dl.acm.org/doi/abs/10.1145/3664647.3681071), [Paper](https://openreview.net/pdf?id=NMMyGy1kKZ), [Code](https://github.com/linhuixiao/HiVG)) and **CLIP-VG** ([Publication](https://ieeexplore.ieee.org/abstract/document/10269126), [Paper](https://arxiv.org/pdf/2305.08685), [Code](https://github.com/linhuixiao/CLIP-VG)). If you have any questions, please feel free to open an issue or contact me by email: <xiaolinhui16@mails.ucas.ac.cn>. Discussions of any kind are welcome! **Please leave a <font color='orange'>STAR ⭐</font> if you like this project!** ## News - :fire: **Update on 2025/07/30:** **All of the code and models have been released!** :exclamation: During the code tidying process, some bugs may arise due to changes in variable names. If any issues occur, please raise them on the [issues page](https://github.com/linhuixiao/OneRef/issues), and I will try to resolve them promptly. 
- :fire: **Update on 2024/12/28: We conducted a Survey of Visual Grounding over the past decade, entitled "Towards Visual Grounding: A Survey" ([Paper](https://arxiv.org/pdf/2412.20206), [Project](https://github.com/linhuixiao/Awesome-Visual-Grounding)). Comments are welcome!** - :fire: **Update on 2024/10/10: Our grounding work **OneRef** ([Paper](https://arxiv.org/abs/2410.08021), [Code](https://github.com/linhuixiao/OneRef), [Model](https://huggingface.co/xiaolinhui/OneRef)) has been accepted by the top conference NeurIPS 2024!** - **Update on 2024/07/16:** **Our grounding work HiVG ([Publication](https://dl.acm.org/doi/abs/10.1145/3664647.3681071), [Paper](https://openreview.net/pdf?id=NMMyGy1kKZ), [Code](https://github.com/linhuixiao/HiVG)) has been accepted by the top conference ACM MM 2024!** - **Update on 2023/9/25:** **Our grounding work CLIP-VG ([paper](https://ieeexplore.ieee.org/abstract/document/10269126), [Code](https://github.com/linhuixiao/CLIP-VG)) has been accepted by the top journal IEEE Transactions on Multimedia (2023)!** ## Citation If you find our work helpful for your research, please consider citing the following BibTeX entry. ```bibtex @inproceedings{xiao2024oneref, title={OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling}, author={Xiao, Linhui and Yang, Xiaoshan and Peng, Fang and Wang, Yaowei and Xu, Changsheng}, booktitle={Proceedings of the 38th International Conference on Neural Information Processing Systems}, year={2024} } ``` <h3 align="left"> Links: <a href="https://arxiv.org/abs/2410.08021">ArXiv</a>, <a href="https://neurips.cc/virtual/2024/poster/93378">NeurIPS 2024</a> </h3> ## TODO [//]: # (The code is currently being tidied up, and both the code and model will be made publicly available soon!) All the code and models for this paper have been released! - [x] Release all the checkpoints. - [x] Release the full model code, training and inference code. ## Contents 1. [Introduction](#introduction) 2. [Usage](#usage) 3. [Results](#results) 4. [Contacts](#contacts) 5. [Acknowledgments](#acknowledgments) ## Highlight - **(i) We pioneer the application of mask modeling to referring tasks by introducing a novel paradigm called mask referring modeling.** This paradigm effectively models the referential relation between vision and language. - **(ii) Diverging from previous works, we propose a remarkably concise one-tower framework for grounding and referring segmentation in a unified modality-shared feature space.** Our model eliminates the commonly used modality interaction modules, modality fusion en-/decoders, and special grounding tokens. - **(iii) We extensively validate the effectiveness of OneRef in three referring tasks on five datasets.** Our method consistently surpasses existing approaches and achieves SoTA performance across several settings, providing valuable new insights for future grounding and referring segmentation research. ## Introduction Constrained by the separate encoding of vision and language, existing grounding and referring segmentation works heavily rely on bulky Transformer-based fusion en-/decoders and a variety of early-stage interaction technologies. Simultaneously, the current mask visual language modeling (MVLM) fails to capture the nuanced referential relationship between image and text in referring tasks. In this paper, we propose **OneRef, a minimalist referring framework built on the modality-shared one-tower transformer that unifies the visual and linguistic feature spaces**. 
To model the referential relationship, we introduce a novel **MVLM paradigm** called **Mask Referring Modeling (MRefM)**, which encompasses both referring-aware mask image modeling and referring-aware mask language modeling. Both modules reconstruct not only modality-related content but also cross-modal referring content. Within MRefM, we propose a referring-aware dynamic image masking strategy that is aware of the referred region rather than relying on fixed ratios or generic random masking schemes. By leveraging the unified visual language feature space and incorporating MRefM’s ability to model the referential relations, our approach enables direct regression of the referring results without resorting to various complex techniques. Our method consistently surpasses existing approaches and achieves SoTA performance on both grounding and segmentation tasks, providing valuable insights for future research. For more details, please refer to [our paper](https://openreview.net/pdf?id=siPdcro6uD). ## Usage ### Dependencies - Python 3.9.10 - PyTorch 2.0.1 - timm 0.6.13 - Check [requirements.txt](requirements.txt) for other dependencies. Our environment is aligned with BEiT-3. Moreover, our model is **easy to deploy** in a variety of environments and **has been successfully tested** on multiple PyTorch versions. ### Image Data Preparation 1. You can download the images from the original source and place them in your disk folder, such as `$/path_to_image_data`: - [MS COCO 2014](download_mscoco2014.sh) (for the RefCOCO, RefCOCO+, and RefCOCOg datasets, almost 13.0 GB) - [ReferItGame](https://drive.google.com/drive/folders/1D4shieeoKly6FswpdjSpaOrxJQNKTyTv) - [Flickr30K Entities](http://shannon.cs.illinois.edu/DenotationGraph/#:~:text=make%20face-,Downloads,-Please%20fill%20in) We provide a script to download the mscoco2014 dataset; just run it in a terminal with the following command: ``` bash download_mscoco2014.sh ``` Alternatively, you can follow the data preparation of TransVG, which can be found in [GETTING_STARTED.md](https://github.com/djiajunustc/TransVG/blob/main/docs/GETTING_STARTED.md). Only the image data in these datasets is used, and it is easily found in similar visual grounding repositories, such as [TransVG](https://github.com/linhuixiao/TransVG). Finally, the `$/path_to_image_data` folder will have the following structure: ```angular2html |-- image_data |-- Flickr30k |-- flickr30k-images |-- other |-- images |-- mscoco |-- images |-- train2014 |-- referit |-- images ``` - ```$/path_to_image_data/image_data/Flickr30k/flickr30k-images/```: Image data for the Flickr30K dataset; please download it from this [link](http://shannon.cs.illinois.edu/DenotationGraph/#:~:text=make%20face-,Downloads,-Please%20fill%20in). Fill in the form and download the images. - ```$/path_to_image_data/image_data/other/images/```: Image data for RefCOCO/RefCOCO+/RefCOCOg, i.e., mscoco2014. - ```$/path_to_image_data/image_data/referit/images/```: Image data for ReferItGame. ## Text-Box Annotations The labels in the fully supervised scenario are consistent with previous works such as [CLIP-VG](https://github.com/linhuixiao/CLIP-VG). :star: As we need to conduct pre-training with mixed datasets, we have shuffled the order of the datasets and unified some of the dataset formats. You need to download our text annotation files from the [HuggingFace homepage](https://huggingface.co/xiaolinhui/OneRef/tree/main/text_box_annotation). 
### Fully supervised setting <table> <tr> <!-- line 3 --> <th style="text-align:center" > Datasets </th> <th style="text-align:center" > RefCOCO </th> <th style="text-align:center" > RefCOCO+ </th> <th style="text-align:center" > RefCOCOg-g </th> <th style="text-align:center" > RefCOCOg-u </th> <th style="text-align:center" > ReferIt </th> <th style="text-align:center" > Flickr </th> <th style="text-align:center" > mixup_with_refc </th> <th style="text-align:center" > mixup_with_refc_referit </th> </tr> <tr> <!-- line 2 --> <th style="text-align:center" rowspan="1"> url, size </th> <!-- table head --> <th style="text-align:center" colspan="8"> <a href="https://huggingface.co/xiaolinhui/OneRef/tree/main/text_box_annotation">All six datasets</a>, ~400.0MB </th> <!-- table head --> </tr> </table> \* The `mixup_with_refc` denotes the mixup of the training data from RefCOCO/+/g-umd (without using gref), which is used in the RES task. The `mixup_with_refc_referit` denotes the mixup of the training data from RefCOCO/+/g (without using gref) and ReferItGame, which is used in the REC task. The val and test splits of both mixups use the val and testA files from RefCOCOg. <u>The training data in RefCOCOg-g (i.e., gref) suffers from data leakage.</u> Download the above annotations to a disk directory such as `$/path_to_split`; it will then have the following directory structure: ```angular2html |-- /single_dataset ├── flickr │ ├── flickr_test.pth │ ├── flickr_train.pth │ └── flickr_val.pth ├── gref │ ├── gref_train.pth │ └── gref_val.pth ├── gref_umd │ ├── gref_umd_test.pth │ ├── gref_umd_train.pth │ └── gref_umd_val.pth ├── referit │ ├── referit_test.pth │ ├── referit_train.pth │ └── referit_val.pth ├── unc │ ├── unc_testA.pth │ ├── unc_testB.pth │ ├── unc_train.pth │ └── unc_val.pth └── unc+ ├── unc+_testA.pth ├── unc+_testB.pth ├── unc+_train.pth └── unc+_val.pth |-- /mixup_with_refc ├── mixup │ ├── mixup_test.pth │ ├── mixup_train.pth │ └── mixup_val.pth |-- /mixup_with_refc_referit ├── mixup │ ├── mixup_test.pth │ ├── mixup_train.pth │ └── mixup_val.pth ``` ## Pre-trained Checkpoints The checkpoints include the <font color=Red>**Base model**</font> and <font color=Red>**Large model**</font> under the <font color=Red>**single-dataset fine-tuning**</font> setting and the <font color=Red>**dataset-mixed grounding pretraining**</font> setting, for both <font color=Red>**REC**</font> and <font color=Red>**RES**</font> tasks. **<font color=Orange>It should be noted that OneRef involves 29 models with a total size of 125GB, and we have made all of these 29 models open source</font>. We ensure that these models can reproduce the results in the paper. If these models fail to reproduce the results or encounter errors (which might be due to model upload errors or model corruption during disk storage), please contact us promptly via email or by raising an issue, and we will check and upload the correct models. 
In total, we trained nearly a hundred models over the course of this research.** <a href="https://huggingface.co/xiaolinhui/OneRef/tree/main"><picture><source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/lobehub/lobe-icons/refs/heads/master/packages/static-png/dark/huggingface-color.png" /><img height="36px" width="36px" src="https://raw.githubusercontent.com/lobehub/lobe-icons/refs/heads/master/packages/static-png/light/huggingface-color.png" /></picture><br/>HuggingFace: All the models are publicly available on the [**OneRef Huggingface homepage**](https://huggingface.co/xiaolinhui/OneRef/tree/main). You can freely download the corresponding models on this website. ### REC task: Single-dataset fine-tuning checkpoints download <table> <tr> <!-- line 3 --> <th style="text-align:center" > Datasets </th> <th style="text-align:center" > RefCOCO </th> <th style="text-align:center" > RefCOCO+ </th> <th style="text-align:center" > RefCOCOg-u </th> <th style="text-align:center" > ReferIt </th> <th style="text-align:center" > Flickr </th> </tr> <tr> <!-- line 2 --> <th style="text-align:center" rowspan="1"> Base model </th> <!-- table head --> <th style="text-align:center" colspan="6"> <a href="https://drive.google.com/file/d/1C64fr7X-snTvZ8MlccPmYZOlkAOhlYpF/view?usp=drive_link"> Google Drive, rec_single_dataset_finetuning_base.zip (for all), ~9.0 GB </a> </th> <!-- table head --> </tr> <tr> <!-- line 2 --> <th style="text-align:center" rowspan="1"> Base model </th> <!-- table head --> <th style="text-align:center" colspan="6"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/rec_single_dataset_finetuning_base.zip"> Hugging Face, rec_single_dataset_finetuning_base.zip (for all), ~9.0 GB </a> </th> <!-- table head --> </tr> <tr> <!-- line 2 --> <th style="text-align:center" rowspan="1"> Large model </th> <!-- table head --> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/rec_single_dataset_finetuning_large_unc.pth">finetuning_large_unc, ~8.0 GB </a> </th> <!-- table head --> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/rec_single_dataset_finetuning_large_unc%2B.pth">finetuning_large_unc+, ~8.0 GB </a> </th> <!-- table head --> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/rec_single_dataset_finetuning_large_gref_umd.pth">finetuning_large_gref_umd, ~8.0 GB </a> </th> <!-- table head --> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/rec_single_dataset_finetuning_large_referit.pth">finetuning_large_referit, ~8.0 GB </a> </th> <!-- table head --> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/rec_single_dataset_finetuning_large_flickr.pth">finetuning_large_flickr, ~8.0 GB </a> </th> <!-- table head --> </tr> </table> ### REC task: Mixup grounding pre-training checkpoints download <table> <tr> <!-- line 3 --> <th style="text-align:center" > Datasets </th> <th style="text-align:center" > Mixup (RefCOCO/+/g) </th> <th style="text-align:center" > ReferIt </th> <th style="text-align:center" > Flickr </th> </tr> <tr> <!-- line 2 --> <th style="text-align:center" rowspan="1"> base model </th> <!-- table head --> <th style="text-align:center" colspan="3"> <a 
href="https://huggingface.co/xiaolinhui/OneRef/blob/main/rec_mixup_grounding_pretraining_base.zip">rec_mixup_grounding_pretraining_base.zip, ~6.0 GB </a> </th> <!-- table head --> </tr> <tr> <!-- line 3 --> <th style="text-align:center" > Large model </th> <th style="text-align:center" > <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/rec_mixup_grounding_pretraining_large_unc%2Bg.pth">mixup_pretraining_large_unc+g, ~8.0 GB</a> </th> <th style="text-align:center" > <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/rec_mixup_grounding_pretraining_large_referit.pth">mixup_pretraining_large_referit, ~8.0 GB</a> </th> <th style="text-align:center" > <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/rec_mixup_grounding_pretraining_large_flickr.pth">mixup_pretraining_large_flickr, ~8.0 GB</a> </th> </tr> </table> ### REC task: Ultimate performance prediction in our [Grounding Survey paper](https://arxiv.org/pdf/2412.20206) <table> <tr> <!-- line 3 --> <th style="text-align:center" > Datasets </th> <th style="text-align:center" > Mixup (RefCOCO/+/g) </th> </tr> <tr> <!-- line 2 --> <th style="text-align:center" rowspan="1"> base model </th> <!-- table head --> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/rec_mixup_grounding_ultimate_performance_base_in_the_survey.zip">rec_mixup_grounding_ultimate_performance_base.zip, ~6.0 GB </a> </th> <!-- table head --> </tr> <tr> <!-- line 3 --> <th style="text-align:center" > Large model </th> <th style="text-align:center" > <a href="">rec_mixup_grounding_ultimate_performance_large, ~8.0 GB</a> </th> </tr> </table> ### RES task: Single-dataset fine-tuning checkpoints download <table> <tr> <!-- line 3 --> <th style="text-align:center" > Datasets </th> <th style="text-align:center" > RefCOCO </th> <th style="text-align:center" > RefCOCO+ </th> <th style="text-align:center" > RefCOCOg-u </th> </tr> <tr> <!-- line 2 --> <th style="text-align:center" rowspan="1"> base model </th> <!-- table head --> <th style="text-align:center" colspan="3"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/res_single_dataset_finetuning_base.zip"> res_single_dataset_finetuning_base.zip, ~6.0 GB </a> </th> <!-- table head --> </tr> <tr> <!-- line 2 --> <th style="text-align:center" rowspan="1"> Large model </th> <!-- table head --> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/res_single_dataset_finetuning_large_unc.pth">finetuning_large_unc, ~8.0 GB </a> </th> <!-- table head --> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/res_single_dataset_finetuning_large_unc%2B.pth">finetuning_large_unc+, ~8.0 GB </a> </th> <!-- table head --> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/res_single_dataset_finetuning_large_gref_umd.pth">finetuning_large_gref_umd, ~8.0 GB </a> </th> <!-- table head --> </tr> </table> ### RES task: Mixup grounding pre-training checkpoints download <table> <tr> <!-- line 3 --> <th style="text-align:center" > Datasets </th> <th style="text-align:center" > Mixup (RefCOCO/+/g) </th> </tr> <tr> <!-- line 2 --> <th style="text-align:center" rowspan="1"> base model </th> <!-- table head --> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/res_mixup_grounding_pretraining_base.zip">res_mixup_pretraining_base.zip, ~1.0 GB </a> </th> <!-- table head 
--> </tr> <tr> <!-- line 3 --> <th style="text-align:center" > Large model </th> <th style="text-align:center" > <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/res_mixup_grounding_pretraining_large_unc_%2B_g.pth">res_mixup_pretraining_large, ~2.0 GB</a> </th> </tr> </table> After downloading all of these checkpoints, you can save them in the following directory, allowing you to train and test the five datasets at once with just a single script. ```angular2html |-- /finetuning_checkpoints (base or large model, rec or res task) ├── flickr │ └── best_checkpoint.pth ├── gref_umd │ └── best_checkpoint.pth ├── referit │ └── best_checkpoint.pth ├── unc │ └── best_checkpoint.pth └── unc+ └── best_checkpoint.pth |-- /mixup_grounding_pretraining (base or large model, rec or res task) └── mixup └── best_checkpoint.pth ``` ### MRefM pretrained backbone checkpoints download We propose our multimodal **Mask Referring Modeling (MRefM)** paradigm to enhance the model's referring comprehension ability. Since MRefM aims to improve its general referring comprehension ability through pre-training, it mainly demonstrates its performance gain under the mixed pre-training setting. In the experiment, the MRefM pre-training **for the REC task** is mainly carried out through a mixture of the RefCOCO/+/g (short as RefC) and ReferIt datasets. To ensure a fair comparison, the MRefM pre-training **for the RES task** is mainly carried out through a mixture of the RefC datasets. For MRefM pre-training, the base model took 15 hours on 32 NVIDIA A100 GPUs, while the large model took 50 hours on the same number of GPUs. We provide the MRefM pre-trained checkpoints below; all models are placed on the [HuggingFace page](https://huggingface.co/xiaolinhui/OneRef/tree/main) <table> <tr> <!-- line 3 --> <th style="text-align:center" > MRefM Model for REC </th> <th style="text-align:center" > Pretraining dataset </th> <th style="text-align:center" > Checkpoints </th> </tr> <tr> <!-- line 2 --> <th style="text-align:center" rowspan="1"> Base model </th> <!-- table head --> <th style="text-align:center" rowspan="1"> RefC,ReferIt </th> <!-- table head --> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/mrefm_pretrain_patch16_384/rec_mrefm_pretrain_base_patch16_384.pth">rec_mrefm_base_patch16_384, ~2 GB </a> </th> <!-- table head --> </tr> <tr> <!-- line 3 --> <th style="text-align:center" > Large model </th> <th style="text-align:center" rowspan="1"> RefC,ReferIt </th> <!-- table head --> <th style="text-align:center" > <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/mrefm_pretrain_patch16_384/rec_mrefm_pretrain_large_patch16_384.pth">rec_mrefm_large_patch16_384, ~7 GB</a> </th> </tr> </table> <table> <tr> <!-- line 3 --> <th style="text-align:center" > MRefM Model for RES </th> <th style="text-align:center" > Pretraining dataset </th> <th style="text-align:center" > Checkpoints </th> </tr> <tr> <!-- line 2 --> <th style="text-align:center" rowspan="1"> Base model </th> <!-- table head --> <th style="text-align:center" > RefC </th> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/mrefm_pretrain_patch16_384/res_mrefm_pretrain_base_patch16_384.pth">res_mrefm_base_patch16_384, ~2 GB </a> </th> <!-- table head --> </tr> <tr> <!-- line 3 --> <th style="text-align:center" > Large model </th> <th style="text-align:center" > RefC </th> <th style="text-align:center" > <a 
href="https://huggingface.co/xiaolinhui/OneRef/blob/main/mrefm_pretrain_patch16_384/res_mrefm_pretrain_large_patch16_384.pth">res_mrefm_large_patch16_384, ~7 GB</a> </th> </tr> </table> ### Original BEiT-3 checkpoints download In order to facilitate the reproducibility of the MRefM pre-training results and to achieve transferability in non-MRefM settings, we also provide the original BEiT-3 model as follows. You can download it from the table below or from the [BEiT-3 official repository](https://github.com/microsoft/unilm/tree/master/beit3). <table> <tr> <!-- line 3 --> <th style="text-align:center" > BEiT-3 original model </th> <th style="text-align:center" > Checkpoints </th> </tr> <tr> <!-- line 2 --> <th style="text-align:center" rowspan="1"> Sentencepiece model (Tokenizer) </th> <!-- table head --> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/beit3_checkpoints/beit3.spm">sp3 Sentencepiece model, 1 MB </a> </th> <!-- table head --> </tr> <tr> <!-- line 2 --> <th style="text-align:center" rowspan="1"> MIM VQKD model </th> <!-- table head --> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/beit3_checkpoints/vqkd_encoder_base_decoder_3x768x12_clip-d5036aa7.pth">vqkd model, 438 MB </a> </th> <!-- table head --> </tr> <tr> <!-- line 2 --> <th style="text-align:center" rowspan="1"> BEiT-3 Base model </th> <!-- table head --> <th style="text-align:center" colspan="1"> <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/beit3_checkpoints/beit3_base_indomain_patch16_224.pth">beit3_base_indomain_patch16_224, 554 MB </a> </th> <!-- table head --> </tr> <tr> <!-- line 3 --> <th style="text-align:center" > BEiT-3 Large model </th> <th style="text-align:center" > <a href="https://huggingface.co/xiaolinhui/OneRef/blob/main/beit3_checkpoints/beit3_large_indomain_patch16_224.pth">beit3_large_indomain_patch16_224, 1.5 GB</a> </th> </tr> </table> ## REC and RES Transfer Training and Evaluation As shown below, we have provided complete evaluation, training, and pre-training scripts in the [train_and_eval_script](train_and_eval_script). 
```angular2html
train_and_eval_script
├── eval_rec_mixup_grounding_pretraining_base.sh
├── eval_rec_mixup_grounding_pretraining_large.sh
├── eval_rec_single_dataset_finetuning_base.sh
├── eval_rec_single_dataset_finetuning_large.sh
├── eval_res_mixup_grounding_pretraining_base.sh
├── eval_res_mixup_grounding_pretraining_large.sh
├── eval_res_single_dataset_finetuning_base.sh
├── eval_res_single_dataset_finetuning_large.sh
├── MRefM_pretraining
│   ├── rec_mrefm_pretraining_base.sh
│   ├── rec_mrefm_pretraining_large.sh
│   ├── res_mrefm_pretraining_base.sh
│   └── res_mrefm_pretraining_large.sh
├── submit_for_multi_node_pretraining
│   ├── get_master_ip.sh
│   ├── master_ip.sh
│   └── train_and_eval_for_multi_node.sh
├── train_rec_mixup_grounding_pretraining_base.sh
├── train_rec_mixup_grounding_pretraining_large.sh
├── train_rec_single_dataset_finetuning_base.sh
├── train_rec_single_dataset_finetuning_large.sh
├── train_res_mixup_grounding_pretraining_base.sh
├── train_res_mixup_grounding_pretraining_large.sh
├── train_res_single_dataset_finetuning_base.sh
└── train_res_single_dataset_finetuning_large.sh
```

You only need to modify the corresponding paths (change `$/path_to_split`, `$/path_to_image_data`, and `$/path_to_output` to your own directories) and then execute the corresponding scripts with the `bash` command to train and test the relevant models.

1. Training on RefCOCO in the single-dataset fine-tuning setting.
```
CUDA_VISIBLE_DEVICES=3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=5 --master_port 28887 --use_env train_clip_vg.py --num_workers 32 --epochs 120 --batch_size 64 --lr 0.00025 --lr_scheduler cosine --aug_crop --aug_scale --aug_translate --imsize 224 --max_query_len 77 --sup_type full --dataset unc --data_root $/path_to_image_data --split_root $/path_to_split --output_dir $/path_to_output/output_v01/unc;
```
Please refer to [train_and_eval_script/train_rec_single_dataset_finetuning_base.sh](train_and_eval_script/train_rec_single_dataset_finetuning_base.sh) for training commands on the other datasets.

2. Evaluation on RefCOCO.
```
CUDA_VISIBLE_DEVICES=2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=6 --master_port 28888 --use_env eval.py --num_workers 2 --batch_size 128 --dataset unc --imsize 224 --max_query_len 77 --data_root $/path_to_image_data --split_root $/path_to_split --eval_model $/path_to_output/output_v01/unc/best_checkpoint.pth --eval_set val --output_dir $/path_to_output/output_v01/unc;
```
Please refer to [train_and_eval_script/eval_rec_single_dataset_finetuning_base.sh](train_and_eval_script/eval_rec_single_dataset_finetuning_base.sh) for evaluation commands on other splits or datasets.

3. We strongly recommend using the `bash` scripts to train or test on the different datasets and splits, which significantly reduces the manual workload. For example:
```
bash train_and_eval_script/train_rec_single_dataset_finetuning_base.sh
```

**It should be noted that**, due to the limited number of data samples in the single-dataset setting, MRefM does not yield significant performance improvements there. To streamline the training process and facilitate the reproducibility of our work, we therefore provide a training recipe without MRefM pre-training specifically for the single-dataset scenario.

## MRefM Pre-training

### 1. Single-node Pre-training

Single-node pre-training requires only one multi-GPU server. You just need to run the following command, which differs little from the fine-tuning command above.
```
CUDA_VISIBLE_DEVICES=3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=5 --master_port 28887 --use_env train_clip_vg.py --num_workers 32 --epochs 120 --batch_size 64 --lr 0.00025 --lr_scheduler cosine --aug_crop --aug_scale --aug_translate --imsize 224 --max_query_len 77 --sup_type full --dataset unc --data_root $/path_to_image_data --split_root $/path_to_split --output_dir $/path_to_output/output_v01/unc;
```

Or use the bash script as follows:

```angular2html
bash train_and_eval_script/MRefM_pretraining/rec_mrefm_pretraining_base.sh
```

### 2. Multi-node Pre-training

Multi-node pre-training requires multiple multi-GPU servers. Use the scripts in the [train_and_eval_script/submit_for_multi_node_pretraining](train_and_eval_script/submit_for_multi_node_pretraining) directory to start the process on each server; for detailed operations, please refer to the relevant tutorials.

## Results

### 1. REC task

<details open>
<summary><font size="4">
REC Single-dataset Fine-tuning SoTA Result Table
</font></summary>
<img src="docs/tab1.jpg" alt="COCO" width="100%">
</details>

<details open>
<summary><font size="4">
REC Dataset-mixed Pretraining SoTA Result Table
</font></summary>
<img src="docs/tab2.jpg" alt="COCO" width="100%">
</details>

### 2. RES task

<details open>
<summary><font size="4">
RES Single-dataset Fine-tuning and Dataset-mixed Pretraining SoTA Result Table (mIoU)
</font></summary>
<img src="docs/tab3.jpg" alt="COCO" width="100%">
</details>

<details open>
<summary><font size="4">
RES Single-dataset Fine-tuning and Dataset-mixed Pretraining SoTA Result Table (oIoU)
</font></summary>
<img src="docs/tab4.jpg" alt="COCO" width="100%">
</details>

### 3. Our model also has significant energy efficiency advantages.

<details open>
<summary><font size="4">
Comparison of the computational cost in the REC task.
</font></summary>
<div align=center>
<img src="docs/tab5.jpg" alt="COCO" width="70%"></div>
</details>

## Methods

<p align="center"> <img src='docs/fig2.jpg' align="center" width="100%"> </p>

**<p align="center"> An illustration of our multimodal Mask Referring Modeling (MRefM) paradigm, which includes referring-aware mask image modeling and referring-aware mask language modeling. </p>**

<p align="center"> <img src='docs/fig3.jpg' align="center" width="100%"> </p>

**<p align="center"> An illustration of the referring-based grounding and segmentation transfer. </p>**

<p align="center"> <img src='docs/fig4.jpg' align="center" width="100%"> </p>

**<p align="center"> Illustrations of random masking (MAE) [27], block-wise masking (BEiT) [4], and our referring-aware dynamic masking. α denotes the overall masking ratio, while β and γ denote the masking ratios beyond and within the referred region. </p>**

## Visualization

<p align="center"> <img src='docs/fig6.jpg' align="center" width="80%"> </p>

**<p align="center"> Qualitative results on the RefCOCO-val dataset. </p>**

<p align="center"> <img src='docs/fig7.jpg' align="center" width="80%"> </p>

**<p align="center"> Qualitative results on the RefCOCO+-val dataset. </p>**

<p align="center"> <img src='docs/fig8.jpg' align="center" width="80%"> </p>

**<p align="center"> Qualitative results on the RefCOCOg-val dataset. </p>**

Each example shows two different query texts. From left to right: the original input image, the ground truth with box and segmentation mask (in green), the RES prediction of OneRef (in cyan), the REC prediction of OneRef (in cyan), and the cross-modal feature.
## Contacts

Email: <xiaolinhui16@mails.ucas.ac.cn>. Any kind of discussion is welcome!

## Acknowledgement

Our model is related to [BEiT-3](https://github.com/microsoft/unilm/tree/master/beit3) and [MAE](https://github.com/facebookresearch/mae). Thanks for their great work! We also thank the great previous work, including [TransVG](https://github.com/linhuixiao/TransVG), [DETR](https://github.com/facebookresearch/detr), [CLIP](https://github.com/openai/CLIP), [CLIP-VG](https://github.com/linhuixiao/CLIP-VG), etc. Thanks to [Microsoft](https://github.com/microsoft/unilm) for their awesome models.

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=linhuixiao/OneRef&type=Date)](https://star-history.com/#linhuixiao/OneRef&Date)
HUY2612/wav2vec
HUY2612
2025-08-13T11:03:08Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2025-08-13T08:27:12Z
--- license: apache-2.0 ---
vengky/blockassist-bc-wild_gentle_manatee_1755079858
vengky
2025-08-13T11:02:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "wild gentle manatee", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T11:02:37Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - wild gentle manatee --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
donoway/TinyStoriesV2_Llama-3.2-1B-urpkj4v3
donoway
2025-08-13T11:02:52Z
0
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-1B", "base_model:adapter:meta-llama/Llama-3.2-1B", "license:llama3.2", "region:us" ]
null
2025-08-13T11:02:47Z
--- library_name: peft license: llama3.2 base_model: meta-llama/Llama-3.2-1B tags: - generated_from_trainer model-index: - name: TinyStoriesV2_Llama-3.2-1B-urpkj4v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TinyStoriesV2_Llama-3.2-1B-urpkj4v3 This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1525 - Model Preparation Time: 0.0176 - Token Accuracy: 0.6754 - Token Error Rate: 0.3246 - Perplexity: 3.1658 - Summed Loss: 3043180.2385 - Total Tokens: 2640694 - Exp Loss: 1.1524 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 32 - eval_batch_size: 112 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 1e-05 - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Token Accuracy | Token Error Rate | Perplexity | Summed Loss | Total Tokens | Exp Loss | |:-------------:|:------:|:-----:|:---------------:|:----------------------:|:--------------:|:----------------:|:----------:|:------------:|:------------:|:--------:| | No log | 0 | 0 | 1.7513 | 0.0176 | 0.5691 | 0.4309 | 5.7590 | 4623224.4282 | 2640694 | 1.7508 | | 1.2569 | 0.0012 | 100 | 1.3040 | 0.0176 | 0.6464 | 0.3536 | 3.6837 | 3443235.6548 | 2640694 | 1.3039 | | 1.2261 | 0.0024 | 200 | 1.2750 | 0.0176 | 0.6524 | 0.3476 | 3.5783 | 3366557.6448 | 2640694 | 1.2749 | | 1.2114 | 0.0036 | 300 | 1.2554 | 0.0176 | 0.6547 | 0.3453 | 3.5090 | 3314921.4094 | 2640694 | 1.2553 | | 1.2036 | 0.0049 | 400 | 1.2264 | 0.0176 | 0.6600 | 0.3400 | 3.4087 | 3238384.3721 | 2640694 | 1.2263 | | 1.2383 | 0.0061 | 500 | 1.2199 | 0.0176 | 0.6609 | 0.3391 | 3.3866 | 3221156.6045 | 2640694 | 1.2198 | | 1.2068 | 0.0073 | 600 | 1.2146 | 0.0176 | 0.6617 | 0.3383 | 3.3688 | 3207234.0923 | 2640694 | 1.2145 | | 1.2313 | 0.0085 | 700 | 1.2100 | 0.0176 | 0.6627 | 0.3373 | 3.3534 | 3195186.1646 | 2640694 | 1.2100 | | 1.2401 | 0.0097 | 800 | 1.2066 | 0.0176 | 0.6637 | 0.3363 | 3.3418 | 3185990.6555 | 2640694 | 1.2065 | | 1.1631 | 0.0109 | 900 | 1.2024 | 0.0176 | 0.6643 | 0.3357 | 3.3279 | 3175034.1326 | 2640694 | 1.2023 | | 1.2011 | 0.0122 | 1000 | 1.1990 | 0.0176 | 0.6653 | 0.3347 | 3.3166 | 3166009.6125 | 2640694 | 1.1989 | | 1.1859 | 0.0134 | 1100 | 1.1972 | 0.0176 | 0.6657 | 0.3343 | 3.3106 | 3161229.3738 | 2640694 | 1.1971 | | 1.1366 | 0.0146 | 1200 | 1.1956 | 0.0176 | 0.6656 | 0.3344 | 3.3052 | 3156940.1401 | 2640694 | 1.1955 | | 1.1676 | 0.0158 | 1300 | 1.1928 | 0.0176 | 0.6665 | 0.3335 | 3.2960 | 3149589.8091 | 2640694 | 1.1927 | | 1.2417 | 0.0170 | 1400 | 1.1924 | 0.0176 | 0.6669 | 0.3331 | 3.2948 | 3148580.7996 | 2640694 | 1.1923 | | 1.1751 | 0.0182 | 1500 | 1.1916 | 0.0176 | 0.6665 | 0.3335 | 3.2923 | 3146604.5005 | 2640694 | 1.1916 | | 1.2389 | 0.0194 | 1600 | 1.1879 | 0.0176 | 0.6677 | 0.3323 | 3.2799 | 3136617.3774 | 2640694 | 1.1878 | | 1.2219 | 0.0207 | 1700 | 1.1885 | 0.0176 | 0.6673 | 0.3327 
| 3.2820 | 3138359.9397 | 2640694 | 1.1885 | | 1.1823 | 0.0219 | 1800 | 1.1865 | 0.0176 | 0.6678 | 0.3322 | 3.2754 | 3133046.6260 | 2640694 | 1.1864 | | 1.1683 | 0.0231 | 1900 | 1.1866 | 0.0176 | 0.6676 | 0.3324 | 3.2757 | 3133291.0630 | 2640694 | 1.1865 | | 1.2647 | 0.0243 | 2000 | 1.1845 | 0.0176 | 0.6680 | 0.3320 | 3.2688 | 3127666.5503 | 2640694 | 1.1844 | | 1.124 | 0.0255 | 2100 | 1.1831 | 0.0176 | 0.6684 | 0.3316 | 3.2642 | 3124010.0591 | 2640694 | 1.1830 | | 1.2189 | 0.0267 | 2200 | 1.1851 | 0.0176 | 0.6680 | 0.3320 | 3.2707 | 3129192.8560 | 2640694 | 1.1850 | | 1.1759 | 0.0280 | 2300 | 1.1826 | 0.0176 | 0.6685 | 0.3315 | 3.2626 | 3122695.3787 | 2640694 | 1.1825 | | 1.2055 | 0.0292 | 2400 | 1.1805 | 0.0176 | 0.6692 | 0.3308 | 3.2559 | 3117252.1963 | 2640694 | 1.1805 | | 1.139 | 0.0304 | 2500 | 1.1784 | 0.0176 | 0.6694 | 0.3306 | 3.2489 | 3111546.7966 | 2640694 | 1.1783 | | 1.1467 | 0.0316 | 2600 | 1.1814 | 0.0176 | 0.6687 | 0.3313 | 3.2587 | 3119498.2261 | 2640694 | 1.1813 | | 1.1233 | 0.0328 | 2700 | 1.1801 | 0.0176 | 0.6692 | 0.3308 | 3.2544 | 3116065.2009 | 2640694 | 1.1800 | | 1.1567 | 0.0340 | 2800 | 1.1788 | 0.0176 | 0.6695 | 0.3305 | 3.2504 | 3112777.7595 | 2640694 | 1.1788 | | 1.2171 | 0.0352 | 2900 | 1.1770 | 0.0176 | 0.6699 | 0.3301 | 3.2443 | 3107805.8584 | 2640694 | 1.1769 | | 1.2191 | 0.0365 | 3000 | 1.1836 | 0.0176 | 0.6684 | 0.3316 | 3.2661 | 3125493.2979 | 2640694 | 1.1836 | | 1.1921 | 0.0377 | 3100 | 1.1766 | 0.0176 | 0.6699 | 0.3301 | 3.2432 | 3106941.3726 | 2640694 | 1.1766 | | 1.2471 | 0.0389 | 3200 | 1.1762 | 0.0176 | 0.6700 | 0.3300 | 3.2420 | 3105919.1436 | 2640694 | 1.1762 | | 1.1774 | 0.0401 | 3300 | 1.1761 | 0.0176 | 0.6704 | 0.3296 | 3.2414 | 3105510.8982 | 2640694 | 1.1760 | | 1.268 | 0.0413 | 3400 | 1.1749 | 0.0176 | 0.6704 | 0.3296 | 3.2376 | 3102384.7751 | 2640694 | 1.1748 | | 1.1824 | 0.0425 | 3500 | 1.1763 | 0.0176 | 0.6701 | 0.3299 | 3.2420 | 3105942.4053 | 2640694 | 1.1762 | | 1.1595 | 0.0438 | 3600 | 1.1744 | 0.0176 | 0.6705 | 0.3295 | 3.2359 | 3100996.8003 | 2640694 | 1.1743 | | 1.2102 | 0.0450 | 3700 | 1.1751 | 0.0176 | 0.6704 | 0.3296 | 3.2383 | 3102954.3594 | 2640694 | 1.1751 | | 1.1465 | 0.0462 | 3800 | 1.1721 | 0.0176 | 0.6712 | 0.3288 | 3.2286 | 3095061.9246 | 2640694 | 1.1721 | | 1.0562 | 0.0474 | 3900 | 1.1736 | 0.0176 | 0.6707 | 0.3293 | 3.2333 | 3098823.6599 | 2640694 | 1.1735 | | 1.1901 | 0.0486 | 4000 | 1.1750 | 0.0176 | 0.6702 | 0.3298 | 3.2378 | 3102574.5845 | 2640694 | 1.1749 | | 1.1493 | 0.0498 | 4100 | 1.1715 | 0.0176 | 0.6713 | 0.3287 | 3.2266 | 3093399.3740 | 2640694 | 1.1714 | | 1.1442 | 0.0510 | 4200 | 1.1731 | 0.0176 | 0.6709 | 0.3291 | 3.2317 | 3097584.2344 | 2640694 | 1.1730 | | 1.1501 | 0.0523 | 4300 | 1.1725 | 0.0176 | 0.6710 | 0.3290 | 3.2300 | 3096152.4263 | 2640694 | 1.1725 | | 1.1754 | 0.0535 | 4400 | 1.1717 | 0.0176 | 0.6708 | 0.3292 | 3.2274 | 3094042.4983 | 2640694 | 1.1717 | | 1.2584 | 0.0547 | 4500 | 1.1710 | 0.0176 | 0.6713 | 0.3287 | 3.2251 | 3092182.4751 | 2640694 | 1.1710 | | 1.1023 | 0.0559 | 4600 | 1.1706 | 0.0176 | 0.6714 | 0.3286 | 3.2239 | 3091156.1873 | 2640694 | 1.1706 | | 1.2072 | 0.0571 | 4700 | 1.1707 | 0.0176 | 0.6713 | 0.3287 | 3.2239 | 3091165.0820 | 2640694 | 1.1706 | | 1.0966 | 0.0583 | 4800 | 1.1704 | 0.0176 | 0.6715 | 0.3285 | 3.2232 | 3090633.5090 | 2640694 | 1.1704 | | 1.1564 | 0.0596 | 4900 | 1.1681 | 0.0176 | 0.6720 | 0.3280 | 3.2158 | 3084545.1221 | 2640694 | 1.1681 | | 1.1508 | 0.0608 | 5000 | 1.1700 | 0.0176 | 0.6716 | 0.3284 | 3.2216 | 3089320.7778 | 2640694 | 1.1699 | | 1.2093 | 
0.0620 | 5100 | 1.1719 | 0.0176 | 0.6712 | 0.3288 | 3.2278 | 3094341.8901 | 2640694 | 1.1718 | | 1.2239 | 0.0632 | 5200 | 1.1705 | 0.0176 | 0.6714 | 0.3286 | 3.2232 | 3090638.0591 | 2640694 | 1.1704 | | 1.1233 | 0.0644 | 5300 | 1.1670 | 0.0176 | 0.6724 | 0.3276 | 3.2121 | 3081504.7913 | 2640694 | 1.1669 | | 1.2494 | 0.0656 | 5400 | 1.1701 | 0.0176 | 0.6717 | 0.3283 | 3.2222 | 3089783.9871 | 2640694 | 1.1701 | | 1.0811 | 0.0668 | 5500 | 1.1670 | 0.0176 | 0.6725 | 0.3275 | 3.2120 | 3081434.5095 | 2640694 | 1.1669 | | 1.152 | 0.0681 | 5600 | 1.1681 | 0.0176 | 0.6721 | 0.3279 | 3.2158 | 3084538.5535 | 2640694 | 1.1681 | | 1.1712 | 0.0693 | 5700 | 1.1692 | 0.0176 | 0.6719 | 0.3281 | 3.2191 | 3087215.9036 | 2640694 | 1.1691 | | 1.1205 | 0.0705 | 5800 | 1.1686 | 0.0176 | 0.6718 | 0.3282 | 3.2174 | 3085859.4868 | 2640694 | 1.1686 | | 1.2231 | 0.0717 | 5900 | 1.1673 | 0.0176 | 0.6723 | 0.3277 | 3.2133 | 3082456.3384 | 2640694 | 1.1673 | | 1.1367 | 0.0729 | 6000 | 1.1680 | 0.0176 | 0.6717 | 0.3283 | 3.2153 | 3084084.9756 | 2640694 | 1.1679 | | 1.1512 | 0.0741 | 6100 | 1.1666 | 0.0176 | 0.6722 | 0.3278 | 3.2109 | 3080526.5339 | 2640694 | 1.1666 | | 1.1608 | 0.0754 | 6200 | 1.1675 | 0.0176 | 0.6722 | 0.3278 | 3.2137 | 3082814.7917 | 2640694 | 1.1674 | | 1.1773 | 0.0766 | 6300 | 1.1686 | 0.0176 | 0.6717 | 0.3283 | 3.2172 | 3085714.2290 | 2640694 | 1.1685 | | 1.2041 | 0.0778 | 6400 | 1.1665 | 0.0176 | 0.6723 | 0.3277 | 3.2107 | 3080344.3835 | 2640694 | 1.1665 | | 1.1597 | 0.0790 | 6500 | 1.1672 | 0.0176 | 0.6721 | 0.3279 | 3.2127 | 3082009.5845 | 2640694 | 1.1671 | | 1.219 | 0.0802 | 6600 | 1.1656 | 0.0176 | 0.6723 | 0.3277 | 3.2078 | 3077931.4578 | 2640694 | 1.1656 | | 1.1578 | 0.0814 | 6700 | 1.1687 | 0.0176 | 0.6718 | 0.3282 | 3.2176 | 3086027.1304 | 2640694 | 1.1686 | | 1.1535 | 0.0826 | 6800 | 1.1666 | 0.0176 | 0.6723 | 0.3277 | 3.2109 | 3080491.3015 | 2640694 | 1.1665 | | 1.2193 | 0.0839 | 6900 | 1.1667 | 0.0176 | 0.6726 | 0.3274 | 3.2111 | 3080646.9297 | 2640694 | 1.1666 | | 1.1603 | 0.0851 | 7000 | 1.1671 | 0.0176 | 0.6724 | 0.3276 | 3.2124 | 3081732.2048 | 2640694 | 1.1670 | | 1.1779 | 0.0863 | 7100 | 1.1661 | 0.0176 | 0.6722 | 0.3278 | 3.2094 | 3079283.9666 | 2640694 | 1.1661 | | 1.2216 | 0.0875 | 7200 | 1.1650 | 0.0176 | 0.6729 | 0.3271 | 3.2057 | 3076246.2913 | 2640694 | 1.1649 | | 1.1887 | 0.0887 | 7300 | 1.1651 | 0.0176 | 0.6728 | 0.3272 | 3.2060 | 3076511.1555 | 2640694 | 1.1650 | | 1.1503 | 0.0899 | 7400 | 1.1652 | 0.0176 | 0.6726 | 0.3274 | 3.2063 | 3076707.5808 | 2640694 | 1.1651 | | 1.1457 | 0.0912 | 7500 | 1.1637 | 0.0176 | 0.6730 | 0.3270 | 3.2015 | 3072766.9187 | 2640694 | 1.1636 | | 1.1506 | 0.0924 | 7600 | 1.1645 | 0.0176 | 0.6731 | 0.3269 | 3.2042 | 3074975.6101 | 2640694 | 1.1645 | | 1.1697 | 0.0936 | 7700 | 1.1652 | 0.0176 | 0.6725 | 0.3275 | 3.2064 | 3076775.2280 | 2640694 | 1.1651 | | 1.1528 | 0.0948 | 7800 | 1.1655 | 0.0176 | 0.6725 | 0.3275 | 3.2073 | 3077512.0928 | 2640694 | 1.1654 | | 1.2377 | 0.0960 | 7900 | 1.1634 | 0.0176 | 0.6729 | 0.3271 | 3.2008 | 3072145.2961 | 2640694 | 1.1634 | | 1.1785 | 0.0972 | 8000 | 1.1635 | 0.0176 | 0.6727 | 0.3273 | 3.2010 | 3072316.5308 | 2640694 | 1.1635 | | 1.1347 | 0.0985 | 8100 | 1.1645 | 0.0176 | 0.6731 | 0.3269 | 3.2042 | 3074952.6060 | 2640694 | 1.1644 | | 1.1646 | 0.0997 | 8200 | 1.1617 | 0.0176 | 0.6734 | 0.3266 | 3.1952 | 3067585.1196 | 2640694 | 1.1617 | | 1.1023 | 0.1009 | 8300 | 1.1650 | 0.0176 | 0.6726 | 0.3274 | 3.2059 | 3076369.6384 | 2640694 | 1.1650 | | 1.305 | 0.1021 | 8400 | 1.1633 | 0.0176 | 0.6731 | 0.3269 | 3.2004 
| 3071868.8569 | 2640694 | 1.1633 | | 1.1284 | 0.1033 | 8500 | 1.1632 | 0.0176 | 0.6727 | 0.3273 | 3.1999 | 3071414.3730 | 2640694 | 1.1631 | | 1.1859 | 0.1045 | 8600 | 1.1628 | 0.0176 | 0.6728 | 0.3272 | 3.1987 | 3070480.0918 | 2640694 | 1.1628 | | 1.1338 | 0.1057 | 8700 | 1.1629 | 0.0176 | 0.6730 | 0.3270 | 3.1991 | 3070798.8152 | 2640694 | 1.1629 | | 1.1901 | 0.1070 | 8800 | 1.1622 | 0.0176 | 0.6732 | 0.3268 | 3.1967 | 3068772.8962 | 2640694 | 1.1621 | | 1.1468 | 0.1082 | 8900 | 1.1629 | 0.0176 | 0.6729 | 0.3271 | 3.1989 | 3070601.0066 | 2640694 | 1.1628 | | 1.1551 | 0.1094 | 9000 | 1.1642 | 0.0176 | 0.6730 | 0.3270 | 3.2031 | 3074052.3931 | 2640694 | 1.1641 | | 1.1415 | 0.1106 | 9100 | 1.1629 | 0.0176 | 0.6733 | 0.3267 | 3.1989 | 3070590.1243 | 2640694 | 1.1628 | | 1.16 | 0.1118 | 9200 | 1.1642 | 0.0176 | 0.6730 | 0.3270 | 3.2032 | 3074185.7690 | 2640694 | 1.1642 | | 1.0755 | 0.1130 | 9300 | 1.1619 | 0.0176 | 0.6735 | 0.3265 | 3.1958 | 3068029.6382 | 2640694 | 1.1618 | | 1.1747 | 0.1143 | 9400 | 1.1636 | 0.0176 | 0.6726 | 0.3274 | 3.2013 | 3072566.6504 | 2640694 | 1.1635 | | 1.1341 | 0.1155 | 9500 | 1.1641 | 0.0176 | 0.6731 | 0.3269 | 3.2029 | 3073882.4390 | 2640694 | 1.1640 | | 1.1902 | 0.1167 | 9600 | 1.1660 | 0.0176 | 0.6724 | 0.3276 | 3.2089 | 3078847.6938 | 2640694 | 1.1659 | | 1.0798 | 0.1179 | 9700 | 1.1614 | 0.0176 | 0.6736 | 0.3264 | 3.1941 | 3066681.2961 | 2640694 | 1.1613 | | 1.1922 | 0.1191 | 9800 | 1.1617 | 0.0176 | 0.6735 | 0.3265 | 3.1951 | 3067511.3120 | 2640694 | 1.1616 | | 1.2114 | 0.1203 | 9900 | 1.1619 | 0.0176 | 0.6732 | 0.3268 | 3.1957 | 3068005.4448 | 2640694 | 1.1618 | | 1.1804 | 0.1215 | 10000 | 1.1616 | 0.0176 | 0.6734 | 0.3266 | 3.1948 | 3067235.7944 | 2640694 | 1.1615 | | 1.176 | 0.1228 | 10100 | 1.1619 | 0.0176 | 0.6732 | 0.3268 | 3.1959 | 3068106.4937 | 2640694 | 1.1619 | | 1.2139 | 0.1240 | 10200 | 1.1619 | 0.0176 | 0.6734 | 0.3266 | 3.1957 | 3068001.6694 | 2640694 | 1.1618 | | 1.1789 | 0.1252 | 10300 | 1.1624 | 0.0176 | 0.6733 | 0.3267 | 3.1974 | 3069345.8162 | 2640694 | 1.1623 | | 1.1612 | 0.1264 | 10400 | 1.1631 | 0.0176 | 0.6728 | 0.3272 | 3.1996 | 3071231.4404 | 2640694 | 1.1630 | | 1.1193 | 0.1276 | 10500 | 1.1621 | 0.0176 | 0.6733 | 0.3267 | 3.1966 | 3068736.4324 | 2640694 | 1.1621 | | 1.195 | 0.1288 | 10600 | 1.1626 | 0.0176 | 0.6734 | 0.3266 | 3.1982 | 3070003.9492 | 2640694 | 1.1626 | | 1.1626 | 0.1301 | 10700 | 1.1610 | 0.0176 | 0.6738 | 0.3262 | 3.1929 | 3065641.1260 | 2640694 | 1.1609 | | 1.1222 | 0.1313 | 10800 | 1.1603 | 0.0176 | 0.6738 | 0.3262 | 3.1907 | 3063800.4529 | 2640694 | 1.1602 | | 1.1739 | 0.1325 | 10900 | 1.1641 | 0.0176 | 0.6729 | 0.3271 | 3.2028 | 3073868.6157 | 2640694 | 1.1640 | | 1.2339 | 0.1337 | 11000 | 1.1625 | 0.0176 | 0.6731 | 0.3269 | 3.1976 | 3069540.3071 | 2640694 | 1.1624 | | 1.1053 | 0.1349 | 11100 | 1.1631 | 0.0176 | 0.6730 | 0.3270 | 3.1998 | 3071342.9424 | 2640694 | 1.1631 | | 1.1766 | 0.1361 | 11200 | 1.1617 | 0.0176 | 0.6735 | 0.3265 | 3.1952 | 3067526.0657 | 2640694 | 1.1616 | | 1.1243 | 0.1373 | 11300 | 1.1641 | 0.0176 | 0.6729 | 0.3271 | 3.2030 | 3073972.6052 | 2640694 | 1.1641 | | 1.078 | 0.1386 | 11400 | 1.1613 | 0.0176 | 0.6733 | 0.3267 | 3.1938 | 3066369.2188 | 2640694 | 1.1612 | | 1.1873 | 0.1398 | 11500 | 1.1605 | 0.0176 | 0.6739 | 0.3261 | 3.1914 | 3064406.8967 | 2640694 | 1.1605 | | 1.1713 | 0.1410 | 11600 | 1.1614 | 0.0176 | 0.6732 | 0.3268 | 3.1944 | 3066883.8167 | 2640694 | 1.1614 | | 1.1254 | 0.1422 | 11700 | 1.1632 | 0.0176 | 0.6731 | 0.3269 | 3.2000 | 3071529.9399 | 2640694 | 1.1632 | | 
1.1755 | 0.1434 | 11800 | 1.1611 | 0.0176 | 0.6737 | 0.3263 | 3.1933 | 3066010.9358 | 2640694 | 1.1611 | | 1.1403 | 0.1446 | 11900 | 1.1610 | 0.0176 | 0.6735 | 0.3265 | 3.1931 | 3065800.0869 | 2640694 | 1.1610 | | 1.1611 | 0.1459 | 12000 | 1.1607 | 0.0176 | 0.6736 | 0.3264 | 3.1920 | 3064907.2505 | 2640694 | 1.1606 | | 1.1446 | 0.1471 | 12100 | 1.1599 | 0.0176 | 0.6738 | 0.3262 | 3.1895 | 3062824.7629 | 2640694 | 1.1599 | | 1.1479 | 0.1483 | 12200 | 1.1615 | 0.0176 | 0.6733 | 0.3267 | 3.1947 | 3067116.6277 | 2640694 | 1.1615 | | 1.121 | 0.1495 | 12300 | 1.1607 | 0.0176 | 0.6738 | 0.3262 | 3.1920 | 3064918.3704 | 2640694 | 1.1606 | | 1.1693 | 0.1507 | 12400 | 1.1620 | 0.0176 | 0.6730 | 0.3270 | 3.1962 | 3068421.9802 | 2640694 | 1.1620 | | 1.1925 | 0.1519 | 12500 | 1.1634 | 0.0176 | 0.6728 | 0.3272 | 3.2007 | 3072131.1509 | 2640694 | 1.1634 | | 1.143 | 0.1531 | 12600 | 1.1605 | 0.0176 | 0.6739 | 0.3261 | 3.1912 | 3064269.9768 | 2640694 | 1.1604 | | 1.2012 | 0.1544 | 12700 | 1.1602 | 0.0176 | 0.6736 | 0.3264 | 3.1905 | 3063670.3145 | 2640694 | 1.1602 | | 1.1563 | 0.1556 | 12800 | 1.1600 | 0.0176 | 0.6736 | 0.3264 | 3.1897 | 3062974.3860 | 2640694 | 1.1599 | | 1.1444 | 0.1568 | 12900 | 1.1601 | 0.0176 | 0.6737 | 0.3263 | 3.1901 | 3063330.9849 | 2640694 | 1.1600 | | 1.1653 | 0.1580 | 13000 | 1.1597 | 0.0176 | 0.6737 | 0.3263 | 3.1888 | 3062304.0293 | 2640694 | 1.1597 | | 1.1816 | 0.1592 | 13100 | 1.1614 | 0.0176 | 0.6734 | 0.3266 | 3.1943 | 3066842.8777 | 2640694 | 1.1614 | | 1.1789 | 0.1604 | 13200 | 1.1595 | 0.0176 | 0.6739 | 0.3261 | 3.1882 | 3061752.7795 | 2640694 | 1.1595 | | 1.152 | 0.1617 | 13300 | 1.1621 | 0.0176 | 0.6733 | 0.3267 | 3.1966 | 3068682.2654 | 2640694 | 1.1621 | | 1.1263 | 0.1629 | 13400 | 1.1612 | 0.0176 | 0.6735 | 0.3265 | 3.1935 | 3066184.1140 | 2640694 | 1.1611 | | 1.1394 | 0.1641 | 13500 | 1.1608 | 0.0176 | 0.6739 | 0.3261 | 3.1922 | 3065082.8247 | 2640694 | 1.1607 | | 1.1528 | 0.1653 | 13600 | 1.1604 | 0.0176 | 0.6737 | 0.3263 | 3.1910 | 3064126.5076 | 2640694 | 1.1603 | | 1.1887 | 0.1665 | 13700 | 1.1613 | 0.0176 | 0.6734 | 0.3266 | 3.1939 | 3066501.9023 | 2640694 | 1.1612 | | 1.1385 | 0.1677 | 13800 | 1.1585 | 0.0176 | 0.6742 | 0.3258 | 3.1849 | 3059060.5854 | 2640694 | 1.1584 | | 1.1537 | 0.1689 | 13900 | 1.1609 | 0.0176 | 0.6739 | 0.3261 | 3.1928 | 3065549.7197 | 2640694 | 1.1609 | | 1.1059 | 0.1702 | 14000 | 1.1593 | 0.0176 | 0.6740 | 0.3260 | 3.1875 | 3061203.8608 | 2640694 | 1.1592 | | 1.1647 | 0.1714 | 14100 | 1.1596 | 0.0176 | 0.6738 | 0.3262 | 3.1886 | 3062096.2544 | 2640694 | 1.1596 | | 1.1572 | 0.1726 | 14200 | 1.1610 | 0.0176 | 0.6737 | 0.3263 | 3.1928 | 3065596.2395 | 2640694 | 1.1609 | | 1.1799 | 0.1738 | 14300 | 1.1614 | 0.0176 | 0.6737 | 0.3263 | 3.1941 | 3066674.1536 | 2640694 | 1.1613 | | 1.1104 | 0.1750 | 14400 | 1.1592 | 0.0176 | 0.6739 | 0.3261 | 3.1871 | 3060834.1904 | 2640694 | 1.1591 | | 1.245 | 0.1762 | 14500 | 1.1610 | 0.0176 | 0.6733 | 0.3267 | 3.1929 | 3065691.8489 | 2640694 | 1.1609 | | 1.146 | 0.1775 | 14600 | 1.1594 | 0.0176 | 0.6738 | 0.3262 | 3.1878 | 3061419.4602 | 2640694 | 1.1593 | | 1.1973 | 0.1787 | 14700 | 1.1594 | 0.0176 | 0.6740 | 0.3260 | 3.1877 | 3061370.3850 | 2640694 | 1.1593 | | 1.2136 | 0.1799 | 14800 | 1.1598 | 0.0176 | 0.6739 | 0.3261 | 3.1892 | 3062587.5840 | 2640694 | 1.1598 | | 1.2153 | 0.1811 | 14900 | 1.1596 | 0.0176 | 0.6739 | 0.3261 | 3.1886 | 3062112.1455 | 2640694 | 1.1596 | | 1.1669 | 0.1823 | 15000 | 1.1628 | 0.0176 | 0.6735 | 0.3265 | 3.1987 | 3070431.8621 | 2640694 | 1.1627 | | 1.1851 | 0.1835 | 15100 | 
1.1595 | 0.0176 | 0.6740 | 0.3260 | 3.1881 | 3061648.6204 | 2640694 | 1.1594 | | 1.1217 | 0.1847 | 15200 | 1.1599 | 0.0176 | 0.6738 | 0.3262 | 3.1894 | 3062796.7461 | 2640694 | 1.1598 | | 1.1146 | 0.1860 | 15300 | 1.1596 | 0.0176 | 0.6739 | 0.3261 | 3.1885 | 3062049.1372 | 2640694 | 1.1596 | | 1.1907 | 0.1872 | 15400 | 1.1593 | 0.0176 | 0.6740 | 0.3260 | 3.1876 | 3061234.1401 | 2640694 | 1.1593 | | 1.0886 | 0.1884 | 15500 | 1.1595 | 0.0176 | 0.6739 | 0.3261 | 3.1881 | 3061674.5730 | 2640694 | 1.1594 | | 1.1937 | 0.1896 | 15600 | 1.1598 | 0.0176 | 0.6737 | 0.3263 | 3.1890 | 3062413.4280 | 2640694 | 1.1597 | | 1.1626 | 0.1908 | 15700 | 1.1597 | 0.0176 | 0.6742 | 0.3258 | 3.1887 | 3062212.6938 | 2640694 | 1.1596 | | 1.2494 | 0.1920 | 15800 | 1.1587 | 0.0176 | 0.6740 | 0.3260 | 3.1855 | 3059546.1497 | 2640694 | 1.1586 | | 1.1418 | 0.1933 | 15900 | 1.1610 | 0.0176 | 0.6738 | 0.3262 | 3.1930 | 3065733.0254 | 2640694 | 1.1610 | | 1.1828 | 0.1945 | 16000 | 1.1595 | 0.0176 | 0.6739 | 0.3261 | 3.1882 | 3061728.8132 | 2640694 | 1.1594 | | 1.14 | 0.1957 | 16100 | 1.1597 | 0.0176 | 0.6743 | 0.3257 | 3.1887 | 3062209.3459 | 2640694 | 1.1596 | | 1.1183 | 0.1969 | 16200 | 1.1583 | 0.0176 | 0.6740 | 0.3260 | 3.1842 | 3058434.2256 | 2640694 | 1.1582 | | 1.1465 | 0.1981 | 16300 | 1.1584 | 0.0176 | 0.6745 | 0.3255 | 3.1846 | 3058820.6899 | 2640694 | 1.1583 | | 1.1617 | 0.1993 | 16400 | 1.1593 | 0.0176 | 0.6740 | 0.3260 | 3.1875 | 3061162.5774 | 2640694 | 1.1592 | | 1.0919 | 0.2005 | 16500 | 1.1588 | 0.0176 | 0.6739 | 0.3261 | 3.1860 | 3059965.8423 | 2640694 | 1.1588 | | 1.1994 | 0.2018 | 16600 | 1.1640 | 0.0176 | 0.6725 | 0.3275 | 3.2026 | 3073629.5388 | 2640694 | 1.1639 | | 1.1429 | 0.2030 | 16700 | 1.1581 | 0.0176 | 0.6742 | 0.3258 | 3.1837 | 3058069.1157 | 2640694 | 1.1581 | | 1.0721 | 0.2042 | 16800 | 1.1575 | 0.0176 | 0.6743 | 0.3257 | 3.1818 | 3056488.1675 | 2640694 | 1.1575 | | 1.1548 | 0.2054 | 16900 | 1.1582 | 0.0176 | 0.6742 | 0.3258 | 3.1839 | 3058193.0916 | 2640694 | 1.1581 | | 1.1681 | 0.2066 | 17000 | 1.1604 | 0.0176 | 0.6736 | 0.3264 | 3.1909 | 3064039.8254 | 2640694 | 1.1603 | | 1.2143 | 0.2078 | 17100 | 1.1591 | 0.0176 | 0.6741 | 0.3259 | 3.1868 | 3060632.8459 | 2640694 | 1.1590 | | 1.1723 | 0.2091 | 17200 | 1.1581 | 0.0176 | 0.6742 | 0.3258 | 3.1839 | 3058166.7698 | 2640694 | 1.1581 | | 1.2437 | 0.2103 | 17300 | 1.1575 | 0.0176 | 0.6742 | 0.3258 | 3.1819 | 3056575.8796 | 2640694 | 1.1575 | | 1.1073 | 0.2115 | 17400 | 1.1584 | 0.0176 | 0.6745 | 0.3255 | 3.1847 | 3058865.7974 | 2640694 | 1.1584 | | 1.2129 | 0.2127 | 17500 | 1.1586 | 0.0176 | 0.6741 | 0.3259 | 3.1854 | 3059438.1680 | 2640694 | 1.1586 | | 1.1207 | 0.2139 | 17600 | 1.1587 | 0.0176 | 0.6740 | 0.3260 | 3.1855 | 3059554.7896 | 2640694 | 1.1586 | | 1.1606 | 0.2151 | 17700 | 1.1583 | 0.0176 | 0.6741 | 0.3259 | 3.1844 | 3058622.5332 | 2640694 | 1.1583 | | 1.122 | 0.2163 | 17800 | 1.1598 | 0.0176 | 0.6740 | 0.3260 | 3.1891 | 3062492.9246 | 2640694 | 1.1597 | | 1.1353 | 0.2176 | 17900 | 1.1587 | 0.0176 | 0.6740 | 0.3260 | 3.1857 | 3059664.1055 | 2640694 | 1.1587 | | 1.1686 | 0.2188 | 18000 | 1.1583 | 0.0176 | 0.6739 | 0.3261 | 3.1843 | 3058545.4570 | 2640694 | 1.1582 | | 1.2439 | 0.2200 | 18100 | 1.1579 | 0.0176 | 0.6745 | 0.3255 | 3.1830 | 3057444.1453 | 2640694 | 1.1578 | | 1.1175 | 0.2212 | 18200 | 1.1579 | 0.0176 | 0.6743 | 0.3257 | 3.1832 | 3057607.9326 | 2640694 | 1.1579 | | 1.1409 | 0.2224 | 18300 | 1.1582 | 0.0176 | 0.6743 | 0.3257 | 3.1839 | 3058221.2646 | 2640694 | 1.1581 | | 1.055 | 0.2236 | 18400 | 1.1598 | 0.0176 | 0.6740 | 
0.3260 | 3.1892 | 3062630.2820 | 2640694 | 1.1598 | | 1.1498 | 0.2249 | 18500 | 1.1556 | 0.0176 | 0.6748 | 0.3252 | 3.1759 | 3051587.0967 | 2640694 | 1.1556 | | 1.1289 | 0.2261 | 18600 | 1.1582 | 0.0176 | 0.6743 | 0.3257 | 3.1841 | 3058333.8699 | 2640694 | 1.1582 | | 1.1356 | 0.2273 | 18700 | 1.1566 | 0.0176 | 0.6745 | 0.3255 | 3.1788 | 3053974.6096 | 2640694 | 1.1565 | | 1.1736 | 0.2285 | 18800 | 1.1591 | 0.0176 | 0.6742 | 0.3258 | 3.1867 | 3060561.1511 | 2640694 | 1.1590 | | 1.1691 | 0.2297 | 18900 | 1.1582 | 0.0176 | 0.6740 | 0.3260 | 3.1840 | 3058257.6523 | 2640694 | 1.1581 | | 1.134 | 0.2309 | 19000 | 1.1578 | 0.0176 | 0.6740 | 0.3260 | 3.1828 | 3057276.9080 | 2640694 | 1.1578 | | 1.1668 | 0.2321 | 19100 | 1.1597 | 0.0176 | 0.6735 | 0.3265 | 3.1889 | 3062343.9336 | 2640694 | 1.1597 | | 1.201 | 0.2334 | 19200 | 1.1556 | 0.0176 | 0.6749 | 0.3251 | 3.1758 | 3051510.4229 | 2640694 | 1.1556 | | 1.1768 | 0.2346 | 19300 | 1.1572 | 0.0176 | 0.6745 | 0.3255 | 3.1808 | 3055642.4592 | 2640694 | 1.1571 | | 1.1172 | 0.2358 | 19400 | 1.1575 | 0.0176 | 0.6747 | 0.3253 | 3.1819 | 3056582.8535 | 2640694 | 1.1575 | | 1.1155 | 0.2370 | 19500 | 1.1596 | 0.0176 | 0.6736 | 0.3264 | 3.1884 | 3061937.7927 | 2640694 | 1.1595 | | 1.1304 | 0.2382 | 19600 | 1.1574 | 0.0176 | 0.6747 | 0.3253 | 3.1813 | 3056088.8354 | 2640694 | 1.1573 | | 1.1149 | 0.2394 | 19700 | 1.1563 | 0.0176 | 0.6744 | 0.3256 | 3.1779 | 3053206.3867 | 2640694 | 1.1562 | | 1.1282 | 0.2407 | 19800 | 1.1591 | 0.0176 | 0.6738 | 0.3262 | 3.1869 | 3060679.4199 | 2640694 | 1.1590 | | 1.1094 | 0.2419 | 19900 | 1.1557 | 0.0176 | 0.6750 | 0.3250 | 3.1761 | 3051718.3398 | 2640694 | 1.1557 | | 1.1305 | 0.2431 | 20000 | 1.1579 | 0.0176 | 0.6744 | 0.3256 | 3.1830 | 3057432.7402 | 2640694 | 1.1578 | | 1.1217 | 0.2443 | 20100 | 1.1579 | 0.0176 | 0.6744 | 0.3256 | 3.1829 | 3057406.5183 | 2640694 | 1.1578 | | 1.1018 | 0.2455 | 20200 | 1.1580 | 0.0176 | 0.6743 | 0.3257 | 3.1833 | 3057711.2183 | 2640694 | 1.1579 | | 1.1233 | 0.2467 | 20300 | 1.1576 | 0.0176 | 0.6745 | 0.3255 | 3.1822 | 3056805.7334 | 2640694 | 1.1576 | | 1.1806 | 0.2479 | 20400 | 1.1553 | 0.0176 | 0.6751 | 0.3249 | 3.1748 | 3050675.1475 | 2640694 | 1.1553 | | 1.1639 | 0.2492 | 20500 | 1.1575 | 0.0176 | 0.6744 | 0.3256 | 3.1817 | 3056413.0237 | 2640694 | 1.1574 | | 1.2077 | 0.2504 | 20600 | 1.1582 | 0.0176 | 0.6742 | 0.3258 | 3.1839 | 3058226.8184 | 2640694 | 1.1581 | | 1.1642 | 0.2516 | 20700 | 1.1578 | 0.0176 | 0.6743 | 0.3257 | 3.1829 | 3057398.4604 | 2640694 | 1.1578 | | 1.1108 | 0.2528 | 20800 | 1.1563 | 0.0176 | 0.6745 | 0.3255 | 3.1779 | 3053188.6060 | 2640694 | 1.1562 | | 1.2739 | 0.2540 | 20900 | 1.1570 | 0.0176 | 0.6743 | 0.3257 | 3.1801 | 3055029.2732 | 2640694 | 1.1569 | | 1.1501 | 0.2552 | 21000 | 1.1568 | 0.0176 | 0.6743 | 0.3257 | 3.1795 | 3054542.2339 | 2640694 | 1.1567 | | 1.1784 | 0.2565 | 21100 | 1.1572 | 0.0176 | 0.6744 | 0.3256 | 3.1808 | 3055648.6885 | 2640694 | 1.1571 | | 1.1773 | 0.2577 | 21200 | 1.1572 | 0.0176 | 0.6746 | 0.3254 | 3.1809 | 3055736.2913 | 2640694 | 1.1572 | | 1.1344 | 0.2589 | 21300 | 1.1567 | 0.0176 | 0.6745 | 0.3255 | 3.1793 | 3054387.5400 | 2640694 | 1.1567 | | 1.1229 | 0.2601 | 21400 | 1.1587 | 0.0176 | 0.6736 | 0.3264 | 3.1857 | 3059723.5522 | 2640694 | 1.1587 | | 1.1834 | 0.2613 | 21500 | 1.1577 | 0.0176 | 0.6743 | 0.3257 | 3.1826 | 3057138.7595 | 2640694 | 1.1577 | | 1.0716 | 0.2625 | 21600 | 1.1599 | 0.0176 | 0.6740 | 0.3260 | 3.1896 | 3062903.3074 | 2640694 | 1.1599 | | 1.1996 | 0.2637 | 21700 | 1.1567 | 0.0176 | 0.6746 | 0.3254 | 3.1794 | 
3054449.4414 | 2640694 | 1.1567 | | 1.126 | 0.2650 | 21800 | 1.1547 | 0.0176 | 0.6749 | 0.3251 | 3.1729 | 3049101.4055 | 2640694 | 1.1547 | | 1.1002 | 0.2662 | 21900 | 1.1587 | 0.0176 | 0.6742 | 0.3258 | 3.1857 | 3059717.2148 | 2640694 | 1.1587 | | 1.2212 | 0.2674 | 22000 | 1.1562 | 0.0176 | 0.6743 | 0.3257 | 3.1776 | 3053010.4060 | 2640694 | 1.1561 | | 1.2189 | 0.2686 | 22100 | 1.1581 | 0.0176 | 0.6743 | 0.3257 | 3.1837 | 3058005.3145 | 2640694 | 1.1580 | | 1.139 | 0.2698 | 22200 | 1.1583 | 0.0176 | 0.6742 | 0.3258 | 3.1844 | 3058605.1790 | 2640694 | 1.1583 | | 1.0423 | 0.2710 | 22300 | 1.1571 | 0.0176 | 0.6748 | 0.3252 | 3.1806 | 3055489.0696 | 2640694 | 1.1571 | | 1.1933 | 0.2723 | 22400 | 1.1586 | 0.0176 | 0.6742 | 0.3258 | 3.1853 | 3059374.7837 | 2640694 | 1.1585 | | 1.1148 | 0.2735 | 22500 | 1.1557 | 0.0176 | 0.6748 | 0.3252 | 3.1759 | 3051577.6208 | 2640694 | 1.1556 | | 1.1507 | 0.2747 | 22600 | 1.1573 | 0.0176 | 0.6742 | 0.3258 | 3.1812 | 3055988.3723 | 2640694 | 1.1573 | | 1.0731 | 0.2759 | 22700 | 1.1569 | 0.0176 | 0.6749 | 0.3251 | 3.1799 | 3054912.9124 | 2640694 | 1.1569 | | 1.1549 | 0.2771 | 22800 | 1.1569 | 0.0176 | 0.6746 | 0.3254 | 3.1800 | 3054955.6704 | 2640694 | 1.1569 | | 1.1729 | 0.2783 | 22900 | 1.1587 | 0.0176 | 0.6743 | 0.3257 | 3.1857 | 3059722.4788 | 2640694 | 1.1587 | | 1.0878 | 0.2796 | 23000 | 1.1561 | 0.0176 | 0.6745 | 0.3255 | 3.1774 | 3052838.8301 | 2640694 | 1.1561 | | 1.1741 | 0.2808 | 23100 | 1.1565 | 0.0176 | 0.6746 | 0.3254 | 3.1787 | 3053911.9614 | 2640694 | 1.1565 | | 1.1358 | 0.2820 | 23200 | 1.1568 | 0.0176 | 0.6748 | 0.3252 | 3.1796 | 3054644.2361 | 2640694 | 1.1568 | | 1.1769 | 0.2832 | 23300 | 1.1580 | 0.0176 | 0.6739 | 0.3261 | 3.1833 | 3057730.0439 | 2640694 | 1.1579 | | 1.2632 | 0.2844 | 23400 | 1.1554 | 0.0176 | 0.6749 | 0.3251 | 3.1751 | 3050865.0029 | 2640694 | 1.1553 | | 1.1196 | 0.2856 | 23500 | 1.1555 | 0.0176 | 0.6750 | 0.3250 | 3.1753 | 3051056.8496 | 2640694 | 1.1554 | | 1.1096 | 0.2868 | 23600 | 1.1570 | 0.0176 | 0.6747 | 0.3253 | 3.1803 | 3055211.7085 | 2640694 | 1.1570 | | 1.1172 | 0.2881 | 23700 | 1.1568 | 0.0176 | 0.6745 | 0.3255 | 3.1795 | 3054586.7615 | 2640694 | 1.1567 | | 1.2343 | 0.2893 | 23800 | 1.1564 | 0.0176 | 0.6745 | 0.3255 | 3.1785 | 3053684.3311 | 2640694 | 1.1564 | | 1.1465 | 0.2905 | 23900 | 1.1564 | 0.0176 | 0.6746 | 0.3254 | 3.1782 | 3053508.1870 | 2640694 | 1.1563 | | 1.089 | 0.2917 | 24000 | 1.1577 | 0.0176 | 0.6741 | 0.3259 | 3.1823 | 3056865.4199 | 2640694 | 1.1576 | | 1.1039 | 0.2929 | 24100 | 1.1564 | 0.0176 | 0.6747 | 0.3253 | 3.1783 | 3053590.7302 | 2640694 | 1.1564 | | 1.1556 | 0.2941 | 24200 | 1.1560 | 0.0176 | 0.6747 | 0.3253 | 3.1771 | 3052556.0142 | 2640694 | 1.1560 | | 1.1505 | 0.2954 | 24300 | 1.1561 | 0.0176 | 0.6745 | 0.3255 | 3.1773 | 3052745.3689 | 2640694 | 1.1560 | | 1.1267 | 0.2966 | 24400 | 1.1567 | 0.0176 | 0.6743 | 0.3257 | 3.1792 | 3054297.0378 | 2640694 | 1.1566 | | 1.1601 | 0.2978 | 24500 | 1.1546 | 0.0176 | 0.6752 | 0.3248 | 3.1725 | 3048696.5701 | 2640694 | 1.1545 | | 1.1033 | 0.2990 | 24600 | 1.1564 | 0.0176 | 0.6745 | 0.3255 | 3.1784 | 3053611.8096 | 2640694 | 1.1564 | | 1.2068 | 0.3002 | 24700 | 1.1559 | 0.0176 | 0.6745 | 0.3255 | 3.1766 | 3052144.9099 | 2640694 | 1.1558 | | 1.17 | 0.3014 | 24800 | 1.1565 | 0.0176 | 0.6744 | 0.3256 | 3.1786 | 3053812.6643 | 2640694 | 1.1564 | | 1.1479 | 0.3026 | 24900 | 1.1557 | 0.0176 | 0.6748 | 0.3252 | 3.1762 | 3051773.0249 | 2640694 | 1.1557 | | 1.1891 | 0.3039 | 25000 | 1.1574 | 0.0176 | 0.6744 | 0.3256 | 3.1816 | 3056286.5071 | 2640694 | 
1.1574 | | 1.1387 | 0.3051 | 25100 | 1.1563 | 0.0176 | 0.6747 | 0.3253 | 3.1781 | 3053404.5894 | 2640694 | 1.1563 | | 1.1631 | 0.3063 | 25200 | 1.1561 | 0.0176 | 0.6746 | 0.3254 | 3.1774 | 3052803.3201 | 2640694 | 1.1561 | | 1.1511 | 0.3075 | 25300 | 1.1552 | 0.0176 | 0.6748 | 0.3252 | 3.1746 | 3050496.4209 | 2640694 | 1.1552 | | 1.2023 | 0.3087 | 25400 | 1.1560 | 0.0176 | 0.6748 | 0.3252 | 3.1772 | 3052609.0710 | 2640694 | 1.1560 | | 1.1613 | 0.3099 | 25500 | 1.1557 | 0.0176 | 0.6748 | 0.3252 | 3.1762 | 3051840.3647 | 2640694 | 1.1557 | | 1.1439 | 0.3112 | 25600 | 1.1559 | 0.0176 | 0.6747 | 0.3253 | 3.1769 | 3052373.8560 | 2640694 | 1.1559 | | 1.1234 | 0.3124 | 25700 | 1.1549 | 0.0176 | 0.6750 | 0.3250 | 3.1736 | 3049659.1169 | 2640694 | 1.1549 | | 1.1821 | 0.3136 | 25800 | 1.1568 | 0.0176 | 0.6742 | 0.3258 | 3.1794 | 3054498.5591 | 2640694 | 1.1567 | | 1.0372 | 0.3148 | 25900 | 1.1559 | 0.0176 | 0.6749 | 0.3251 | 3.1766 | 3052163.7593 | 2640694 | 1.1558 | | 1.0582 | 0.3160 | 26000 | 1.1556 | 0.0176 | 0.6743 | 0.3257 | 3.1757 | 3051387.1663 | 2640694 | 1.1555 | | 1.1653 | 0.3172 | 26100 | 1.1572 | 0.0176 | 0.6747 | 0.3253 | 3.1810 | 3055767.7710 | 2640694 | 1.1572 | | 1.1246 | 0.3184 | 26200 | 1.1565 | 0.0176 | 0.6746 | 0.3254 | 3.1785 | 3053723.1353 | 2640694 | 1.1564 | | 1.0971 | 0.3197 | 26300 | 1.1559 | 0.0176 | 0.6748 | 0.3252 | 3.1769 | 3052353.8901 | 2640694 | 1.1559 | | 1.1854 | 0.3209 | 26400 | 1.1566 | 0.0176 | 0.6747 | 0.3253 | 3.1790 | 3054100.7295 | 2640694 | 1.1566 | | 1.1482 | 0.3221 | 26500 | 1.1584 | 0.0176 | 0.6740 | 0.3260 | 3.1845 | 3058743.6472 | 2640694 | 1.1583 | | 1.1306 | 0.3233 | 26600 | 1.1568 | 0.0176 | 0.6743 | 0.3257 | 3.1796 | 3054613.3870 | 2640694 | 1.1567 | | 1.1994 | 0.3245 | 26700 | 1.1555 | 0.0176 | 0.6748 | 0.3252 | 3.1754 | 3051139.9932 | 2640694 | 1.1554 | | 1.1627 | 0.3257 | 26800 | 1.1552 | 0.0176 | 0.6750 | 0.3250 | 3.1746 | 3050498.8501 | 2640694 | 1.1552 | | 1.1925 | 0.3270 | 26900 | 1.1554 | 0.0176 | 0.6747 | 0.3253 | 3.1751 | 3050916.1455 | 2640694 | 1.1553 | | 1.1113 | 0.3282 | 27000 | 1.1574 | 0.0176 | 0.6744 | 0.3256 | 3.1814 | 3056111.8887 | 2640694 | 1.1573 | | 1.1621 | 0.3294 | 27100 | 1.1550 | 0.0176 | 0.6749 | 0.3251 | 3.1740 | 3049968.6670 | 2640694 | 1.1550 | | 1.2042 | 0.3306 | 27200 | 1.1555 | 0.0176 | 0.6749 | 0.3251 | 3.1753 | 3051046.4299 | 2640694 | 1.1554 | | 1.1409 | 0.3318 | 27300 | 1.1549 | 0.0176 | 0.6749 | 0.3251 | 3.1736 | 3049689.4167 | 2640694 | 1.1549 | | 1.1275 | 0.3330 | 27400 | 1.1597 | 0.0176 | 0.6736 | 0.3264 | 3.1888 | 3062259.1294 | 2640694 | 1.1596 | | 1.145 | 0.3342 | 27500 | 1.1563 | 0.0176 | 0.6744 | 0.3256 | 3.1779 | 3053240.3804 | 2640694 | 1.1562 | | 1.1798 | 0.3355 | 27600 | 1.1577 | 0.0176 | 0.6739 | 0.3261 | 3.1824 | 3056950.8601 | 2640694 | 1.1576 | | 1.1323 | 0.3367 | 27700 | 1.1567 | 0.0176 | 0.6743 | 0.3257 | 3.1791 | 3054244.6946 | 2640694 | 1.1566 | | 1.1045 | 0.3379 | 27800 | 1.1541 | 0.0176 | 0.6751 | 0.3249 | 3.1711 | 3047577.0356 | 2640694 | 1.1541 | | 1.0941 | 0.3391 | 27900 | 1.1554 | 0.0176 | 0.6743 | 0.3257 | 3.1750 | 3050810.7437 | 2640694 | 1.1553 | | 1.1462 | 0.3403 | 28000 | 1.1538 | 0.0176 | 0.6752 | 0.3248 | 3.1701 | 3046711.7087 | 2640694 | 1.1538 | | 1.1706 | 0.3415 | 28100 | 1.1567 | 0.0176 | 0.6745 | 0.3255 | 3.1791 | 3054254.1975 | 2640694 | 1.1566 | | 1.187 | 0.3428 | 28200 | 1.1544 | 0.0176 | 0.6751 | 0.3249 | 3.1721 | 3048393.2827 | 2640694 | 1.1544 | | 1.1951 | 0.3440 | 28300 | 1.1568 | 0.0176 | 0.6743 | 0.3257 | 3.1797 | 3054723.9248 | 2640694 | 1.1568 | | 1.2385 | 
0.3452 | 28400 | 1.1563 | 0.0176 | 0.6748 | 0.3252 | 3.1780 | 3053306.5430 | 2640694 | 1.1563 | | 1.2146 | 0.3464 | 28500 | 1.1585 | 0.0176 | 0.6743 | 0.3257 | 3.1849 | 3059021.6895 | 2640694 | 1.1584 | | 1.1047 | 0.3476 | 28600 | 1.1537 | 0.0176 | 0.6752 | 0.3248 | 3.1697 | 3046436.9768 | 2640694 | 1.1537 | | 1.2395 | 0.3488 | 28700 | 1.1567 | 0.0176 | 0.6746 | 0.3254 | 3.1791 | 3054215.3247 | 2640694 | 1.1566 | | 1.1455 | 0.3500 | 28800 | 1.1543 | 0.0176 | 0.6750 | 0.3250 | 3.1716 | 3047995.7617 | 2640694 | 1.1542 | | 1.2534 | 0.3513 | 28900 | 1.1559 | 0.0176 | 0.6745 | 0.3255 | 3.1768 | 3052294.6233 | 2640694 | 1.1559 | | 1.1034 | 0.3525 | 29000 | 1.1565 | 0.0176 | 0.6745 | 0.3255 | 3.1785 | 3053747.9641 | 2640694 | 1.1564 | | 1.157 | 0.3537 | 29100 | 1.1551 | 0.0176 | 0.6752 | 0.3248 | 3.1742 | 3050111.0671 | 2640694 | 1.1550 | | 1.1596 | 0.3549 | 29200 | 1.1550 | 0.0176 | 0.6750 | 0.3250 | 3.1738 | 3049850.6907 | 2640694 | 1.1549 | | 1.2317 | 0.3561 | 29300 | 1.1545 | 0.0176 | 0.6748 | 0.3252 | 3.1721 | 3048415.6470 | 2640694 | 1.1544 | | 1.1159 | 0.3573 | 29400 | 1.1532 | 0.0176 | 0.6753 | 0.3247 | 3.1680 | 3044998.3337 | 2640694 | 1.1531 | | 1.1232 | 0.3586 | 29500 | 1.1564 | 0.0176 | 0.6743 | 0.3257 | 3.1783 | 3053586.8391 | 2640694 | 1.1564 | | 1.1446 | 0.3598 | 29600 | 1.1540 | 0.0176 | 0.6752 | 0.3248 | 3.1708 | 3047339.6987 | 2640694 | 1.1540 | | 1.1321 | 0.3610 | 29700 | 1.1542 | 0.0176 | 0.6749 | 0.3251 | 3.1712 | 3047674.3777 | 2640694 | 1.1541 | | 1.1211 | 0.3622 | 29800 | 1.1540 | 0.0176 | 0.6750 | 0.3250 | 3.1707 | 3047198.6497 | 2640694 | 1.1539 | | 1.0925 | 0.3634 | 29900 | 1.1558 | 0.0176 | 0.6745 | 0.3255 | 3.1764 | 3051990.1877 | 2640694 | 1.1558 | | 1.2852 | 0.3646 | 30000 | 1.1564 | 0.0176 | 0.6749 | 0.3251 | 3.1783 | 3053591.8352 | 2640694 | 1.1564 | | 1.1865 | 0.3658 | 30100 | 1.1559 | 0.0176 | 0.6750 | 0.3250 | 3.1768 | 3052327.5166 | 2640694 | 1.1559 | | 1.2031 | 0.3671 | 30200 | 1.1536 | 0.0176 | 0.6750 | 0.3250 | 3.1695 | 3046200.3301 | 2640694 | 1.1536 | | 1.0701 | 0.3683 | 30300 | 1.1548 | 0.0176 | 0.6748 | 0.3252 | 3.1732 | 3049345.3213 | 2640694 | 1.1548 | | 1.1507 | 0.3695 | 30400 | 1.1556 | 0.0176 | 0.6745 | 0.3255 | 3.1759 | 3051538.3765 | 2640694 | 1.1556 | | 1.1769 | 0.3707 | 30500 | 1.1563 | 0.0176 | 0.6745 | 0.3255 | 3.1778 | 3053143.1763 | 2640694 | 1.1562 | | 1.1134 | 0.3719 | 30600 | 1.1544 | 0.0176 | 0.6751 | 0.3249 | 3.1720 | 3048296.3533 | 2640694 | 1.1544 | | 1.1045 | 0.3731 | 30700 | 1.1562 | 0.0176 | 0.6747 | 0.3253 | 3.1775 | 3052908.4043 | 2640694 | 1.1561 | | 1.175 | 0.3744 | 30800 | 1.1568 | 0.0176 | 0.6747 | 0.3253 | 3.1795 | 3054549.2520 | 2640694 | 1.1567 | | 1.177 | 0.3756 | 30900 | 1.1541 | 0.0176 | 0.6751 | 0.3249 | 3.1710 | 3047476.2375 | 2640694 | 1.1540 | | 1.0942 | 0.3768 | 31000 | 1.1543 | 0.0176 | 0.6754 | 0.3246 | 3.1717 | 3048076.1941 | 2640694 | 1.1543 | | 1.2177 | 0.3780 | 31100 | 1.1549 | 0.0176 | 0.6752 | 0.3248 | 3.1737 | 3049699.1516 | 2640694 | 1.1549 | | 1.1143 | 0.3792 | 31200 | 1.1547 | 0.0176 | 0.6751 | 0.3249 | 3.1728 | 3048969.3032 | 2640694 | 1.1546 | | 1.1583 | 0.3804 | 31300 | 1.1549 | 0.0176 | 0.6749 | 0.3251 | 3.1734 | 3049464.7146 | 2640694 | 1.1548 | | 1.0879 | 0.3816 | 31400 | 1.1547 | 0.0176 | 0.6748 | 0.3252 | 3.1729 | 3049055.8755 | 2640694 | 1.1546 | | 1.1177 | 0.3829 | 31500 | 1.1541 | 0.0176 | 0.6751 | 0.3249 | 3.1711 | 3047533.1575 | 2640694 | 1.1541 | | 1.1405 | 0.3841 | 31600 | 1.1533 | 0.0176 | 0.6752 | 0.3248 | 3.1684 | 3045320.2744 | 2640694 | 1.1532 | | 1.1731 | 0.3853 | 31700 | 1.1550 | 
0.0176 | 0.6748 | 0.3252 | 3.1737 | 3049762.1636 | 2640694 | 1.1549 | | 1.1524 | 0.3865 | 31800 | 1.1554 | 0.0176 | 0.6749 | 0.3251 | 3.1750 | 3050789.3904 | 2640694 | 1.1553 | | 1.0623 | 0.3877 | 31900 | 1.1542 | 0.0176 | 0.6753 | 0.3247 | 3.1713 | 3047733.4976 | 2640694 | 1.1541 | | 1.1189 | 0.3889 | 32000 | 1.1538 | 0.0176 | 0.6753 | 0.3247 | 3.1701 | 3046760.6277 | 2640694 | 1.1538 | | 1.1255 | 0.3902 | 32100 | 1.1548 | 0.0176 | 0.6751 | 0.3249 | 3.1732 | 3049299.0371 | 2640694 | 1.1547 | | 1.1812 | 0.3914 | 32200 | 1.1525 | 0.0176 | 0.6754 | 0.3246 | 3.1658 | 3043180.2385 | 2640694 | 1.1524 | | 1.2244 | 0.3926 | 32300 | 1.1548 | 0.0176 | 0.6749 | 0.3251 | 3.1731 | 3049235.8274 | 2640694 | 1.1547 | | 1.1214 | 0.3938 | 32400 | 1.1542 | 0.0176 | 0.6751 | 0.3249 | 3.1713 | 3047727.5020 | 2640694 | 1.1541 | | 1.1782 | 0.3950 | 32500 | 1.1534 | 0.0176 | 0.6752 | 0.3248 | 3.1688 | 3045634.6729 | 2640694 | 1.1533 | | 1.1698 | 0.3962 | 32600 | 1.1551 | 0.0176 | 0.6748 | 0.3252 | 3.1743 | 3050235.7458 | 2640694 | 1.1551 | | 1.0567 | 0.3974 | 32700 | 1.1538 | 0.0176 | 0.6754 | 0.3246 | 3.1702 | 3046810.1113 | 2640694 | 1.1538 | | 1.1201 | 0.3987 | 32800 | 1.1559 | 0.0176 | 0.6750 | 0.3250 | 3.1768 | 3052310.6689 | 2640694 | 1.1559 | | 1.1509 | 0.3999 | 32900 | 1.1548 | 0.0176 | 0.6749 | 0.3251 | 3.1733 | 3049415.3857 | 2640694 | 1.1548 | | 1.1311 | 0.4011 | 33000 | 1.1548 | 0.0176 | 0.6750 | 0.3250 | 3.1731 | 3049254.6797 | 2640694 | 1.1547 | | 1.2332 | 0.4023 | 33100 | 1.1537 | 0.0176 | 0.6754 | 0.3246 | 3.1696 | 3046294.3992 | 2640694 | 1.1536 | | 1.1885 | 0.4035 | 33200 | 1.1547 | 0.0176 | 0.6753 | 0.3247 | 3.1729 | 3049087.7129 | 2640694 | 1.1547 | | 1.2373 | 0.4047 | 33300 | 1.1553 | 0.0176 | 0.6747 | 0.3253 | 3.1748 | 3050686.5786 | 2640694 | 1.1553 | | 1.1077 | 0.4060 | 33400 | 1.1552 | 0.0176 | 0.6750 | 0.3250 | 3.1745 | 3050357.2617 | 2640694 | 1.1551 | | 1.175 | 0.4072 | 33500 | 1.1536 | 0.0176 | 0.6750 | 0.3250 | 3.1693 | 3046109.8799 | 2640694 | 1.1535 | | 1.2418 | 0.4084 | 33600 | 1.1541 | 0.0176 | 0.6751 | 0.3249 | 3.1710 | 3047481.4873 | 2640694 | 1.1540 | | 1.1381 | 0.4096 | 33700 | 1.1537 | 0.0176 | 0.6751 | 0.3249 | 3.1696 | 3046329.1191 | 2640694 | 1.1536 | | 1.1803 | 0.4108 | 33800 | 1.1545 | 0.0176 | 0.6753 | 0.3247 | 3.1724 | 3048629.6130 | 2640694 | 1.1545 | | 1.1791 | 0.4120 | 33900 | 1.1543 | 0.0176 | 0.6745 | 0.3255 | 3.1715 | 3047891.0708 | 2640694 | 1.1542 | | 1.114 | 0.4132 | 34000 | 1.1534 | 0.0176 | 0.6754 | 0.3246 | 3.1689 | 3045748.1799 | 2640694 | 1.1534 | | 1.1576 | 0.4145 | 34100 | 1.1531 | 0.0176 | 0.6751 | 0.3249 | 3.1679 | 3044873.0642 | 2640694 | 1.1531 | | 1.2083 | 0.4157 | 34200 | 1.1543 | 0.0176 | 0.6752 | 0.3248 | 3.1716 | 3047983.5559 | 2640694 | 1.1542 | | 1.1442 | 0.4169 | 34300 | 1.1566 | 0.0176 | 0.6747 | 0.3253 | 3.1790 | 3054149.7249 | 2640694 | 1.1566 | | 1.0855 | 0.4181 | 34400 | 1.1543 | 0.0176 | 0.6750 | 0.3250 | 3.1716 | 3047969.4722 | 2640694 | 1.1542 | | 1.1477 | 0.4193 | 34500 | 1.1537 | 0.0176 | 0.6751 | 0.3249 | 3.1697 | 3046363.7463 | 2640694 | 1.1536 | | 1.1322 | 0.4205 | 34600 | 1.1538 | 0.0176 | 0.6756 | 0.3244 | 3.1699 | 3046606.1462 | 2640694 | 1.1537 | | 1.1673 | 0.4218 | 34700 | 1.1539 | 0.0176 | 0.6745 | 0.3255 | 3.1704 | 3046969.7253 | 2640694 | 1.1539 | | 1.1164 | 0.4230 | 34800 | 1.1538 | 0.0176 | 0.6752 | 0.3248 | 3.1700 | 3046618.2637 | 2640694 | 1.1537 | | 1.1393 | 0.4242 | 34900 | 1.1547 | 0.0176 | 0.6748 | 0.3252 | 3.1731 | 3049195.5732 | 2640694 | 1.1547 | | 1.1443 | 0.4254 | 35000 | 1.1548 | 0.0176 | 0.6748 | 0.3252 
| 3.1732 | 3049308.0654 | 2640694 | 1.1547 | | 1.0902 | 0.4266 | 35100 | 1.1559 | 0.0176 | 0.6748 | 0.3252 | 3.1766 | 3052155.8862 | 2640694 | 1.1558 | | 1.2037 | 0.4278 | 35200 | 1.1542 | 0.0176 | 0.6751 | 0.3249 | 3.1712 | 3047643.6829 | 2640694 | 1.1541 | | 1.1876 | 0.4290 | 35300 | 1.1565 | 0.0176 | 0.6742 | 0.3258 | 3.1787 | 3053928.2310 | 2640694 | 1.1565 | | 1.1434 | 0.4303 | 35400 | 1.1578 | 0.0176 | 0.6742 | 0.3258 | 3.1829 | 3057368.9883 | 2640694 | 1.1578 | | 1.1709 | 0.4315 | 35500 | 1.1542 | 0.0176 | 0.6752 | 0.3248 | 3.1712 | 3047651.5452 | 2640694 | 1.1541 | | 1.1917 | 0.4327 | 35600 | 1.1535 | 0.0176 | 0.6751 | 0.3249 | 3.1690 | 3045782.0125 | 2640694 | 1.1534 | | 1.1701 | 0.4339 | 35700 | 1.1550 | 0.0176 | 0.6749 | 0.3251 | 3.1739 | 3049892.7576 | 2640694 | 1.1550 | | 1.163 | 0.4351 | 35800 | 1.1533 | 0.0176 | 0.6753 | 0.3247 | 3.1685 | 3045360.9780 | 2640694 | 1.1532 | | 1.0956 | 0.4363 | 35900 | 1.1534 | 0.0176 | 0.6750 | 0.3250 | 3.1687 | 3045593.2595 | 2640694 | 1.1533 | | 1.1102 | 0.4376 | 36000 | 1.1526 | 0.0176 | 0.6757 | 0.3243 | 3.1663 | 3043564.9714 | 2640694 | 1.1526 | | 1.1185 | 0.4388 | 36100 | 1.1547 | 0.0176 | 0.6747 | 0.3253 | 3.1728 | 3048965.0251 | 2640694 | 1.1546 | | 1.1451 | 0.4400 | 36200 | 1.1540 | 0.0176 | 0.6755 | 0.3245 | 3.1706 | 3047182.6775 | 2640694 | 1.1539 | | 1.0803 | 0.4412 | 36300 | 1.1541 | 0.0176 | 0.6754 | 0.3246 | 3.1709 | 3047360.4795 | 2640694 | 1.1540 | | 1.0954 | 0.4424 | 36400 | 1.1534 | 0.0176 | 0.6752 | 0.3248 | 3.1688 | 3045689.0464 | 2640694 | 1.1534 | | 1.0929 | 0.4436 | 36500 | 1.1535 | 0.0176 | 0.6751 | 0.3249 | 3.1690 | 3045810.8271 | 2640694 | 1.1534 | | 1.2101 | 0.4448 | 36600 | 1.1546 | 0.0176 | 0.6753 | 0.3247 | 3.1725 | 3048722.2756 | 2640694 | 1.1545 | | 1.1889 | 0.4461 | 36700 | 1.1538 | 0.0176 | 0.6751 | 0.3249 | 3.1699 | 3046571.8616 | 2640694 | 1.1537 | | 1.2623 | 0.4473 | 36800 | 1.1535 | 0.0176 | 0.6749 | 0.3251 | 3.1690 | 3045854.3633 | 2640694 | 1.1534 | | 1.145 | 0.4485 | 36900 | 1.1553 | 0.0176 | 0.6746 | 0.3254 | 3.1748 | 3050654.9963 | 2640694 | 1.1552 | | 1.1335 | 0.4497 | 37000 | 1.1553 | 0.0176 | 0.6752 | 0.3248 | 3.1747 | 3050531.4019 | 2640694 | 1.1552 | | 1.103 | 0.4509 | 37100 | 1.1540 | 0.0176 | 0.6752 | 0.3248 | 3.1706 | 3047168.2080 | 2640694 | 1.1539 | | 1.143 | 0.4521 | 37200 | 1.1531 | 0.0176 | 0.6753 | 0.3247 | 3.1678 | 3044818.4390 | 2640694 | 1.1530 | | 1.1548 | 0.4534 | 37300 | 1.1565 | 0.0176 | 0.6748 | 0.3252 | 3.1787 | 3053895.5125 | 2640694 | 1.1565 | | 1.1669 | 0.4546 | 37400 | 1.1526 | 0.0176 | 0.6758 | 0.3242 | 3.1661 | 3043389.3010 | 2640694 | 1.1525 | | 1.1344 | 0.4558 | 37500 | 1.1531 | 0.0176 | 0.6753 | 0.3247 | 3.1678 | 3044816.0803 | 2640694 | 1.1530 | | 1.0854 | 0.4570 | 37600 | 1.1533 | 0.0176 | 0.6753 | 0.3247 | 3.1685 | 3045374.0093 | 2640694 | 1.1532 | | 1.1953 | 0.4582 | 37700 | 1.1562 | 0.0176 | 0.6745 | 0.3255 | 3.1776 | 3052951.9736 | 2640694 | 1.1561 | | 1.1594 | 0.4594 | 37800 | 1.1534 | 0.0176 | 0.6749 | 0.3251 | 3.1688 | 3045678.6987 | 2640694 | 1.1534 | | 1.1604 | 0.4607 | 37900 | 1.1535 | 0.0176 | 0.6752 | 0.3248 | 3.1690 | 3045857.6108 | 2640694 | 1.1534 | | 1.1942 | 0.4619 | 38000 | 1.1532 | 0.0176 | 0.6754 | 0.3246 | 3.1681 | 3045041.0811 | 2640694 | 1.1531 | | 1.2014 | 0.4631 | 38100 | 1.1545 | 0.0176 | 0.6750 | 0.3250 | 3.1722 | 3048518.4104 | 2640694 | 1.1544 | | 1.1459 | 0.4643 | 38200 | 1.1539 | 0.0176 | 0.6751 | 0.3249 | 3.1702 | 3046843.8743 | 2640694 | 1.1538 | | 1.2446 | 0.4655 | 38300 | 1.1561 | 0.0176 | 0.6743 | 0.3257 | 3.1775 | 3052874.4746 | 
2640694 | 1.1561 | | 1.2079 | 0.4667 | 38400 | 1.1540 | 0.0176 | 0.6751 | 0.3249 | 3.1707 | 3047235.7576 | 2640694 | 1.1540 | | 1.1518 | 0.4679 | 38500 | 1.1556 | 0.0176 | 0.6748 | 0.3252 | 3.1759 | 3051553.5430 | 2640694 | 1.1556 | | 1.1068 | 0.4692 | 38600 | 1.1554 | 0.0176 | 0.6748 | 0.3252 | 3.1752 | 3050978.4341 | 2640694 | 1.1554 | | 1.1865 | 0.4704 | 38700 | 1.1532 | 0.0176 | 0.6754 | 0.3246 | 3.1681 | 3045088.0642 | 2640694 | 1.1531 | | 1.1137 | 0.4716 | 38800 | 1.1539 | 0.0176 | 0.6747 | 0.3253 | 3.1704 | 3047016.3262 | 2640694 | 1.1539 | | 1.1376 | 0.4728 | 38900 | 1.1551 | 0.0176 | 0.6748 | 0.3252 | 3.1740 | 3050021.7695 | 2640694 | 1.1550 | | 1.1042 | 0.4740 | 39000 | 1.1567 | 0.0176 | 0.6747 | 0.3253 | 3.1792 | 3054330.0503 | 2640694 | 1.1566 | | 1.1393 | 0.4752 | 39100 | 1.1530 | 0.0176 | 0.6755 | 0.3245 | 3.1676 | 3044655.7695 | 2640694 | 1.1530 | | 1.104 | 0.4765 | 39200 | 1.1539 | 0.0176 | 0.6752 | 0.3248 | 3.1702 | 3046856.4084 | 2640694 | 1.1538 | | 1.2001 | 0.4777 | 39300 | 1.1535 | 0.0176 | 0.6754 | 0.3246 | 3.1691 | 3045903.3633 | 2640694 | 1.1534 | | 1.1715 | 0.4789 | 39400 | 1.1538 | 0.0176 | 0.6753 | 0.3247 | 3.1701 | 3046774.9629 | 2640694 | 1.1538 | | 1.0788 | 0.4801 | 39500 | 1.1535 | 0.0176 | 0.6751 | 0.3249 | 3.1690 | 3045811.5151 | 2640694 | 1.1534 | | 1.2 | 0.4813 | 39600 | 1.1529 | 0.0176 | 0.6755 | 0.3245 | 3.1670 | 3044192.1245 | 2640694 | 1.1528 | | 1.2018 | 0.4825 | 39700 | 1.1553 | 0.0176 | 0.6750 | 0.3250 | 3.1749 | 3050733.3479 | 2640694 | 1.1553 | | 1.1214 | 0.4837 | 39800 | 1.1528 | 0.0176 | 0.6755 | 0.3245 | 3.1667 | 3043880.1357 | 2640694 | 1.1527 | | 1.1442 | 0.4850 | 39900 | 1.1538 | 0.0176 | 0.6751 | 0.3249 | 3.1699 | 3046583.2532 | 2640694 | 1.1537 | | 1.0875 | 0.4862 | 40000 | 1.1526 | 0.0176 | 0.6753 | 0.3247 | 3.1663 | 3043533.7446 | 2640694 | 1.1526 | | 1.1224 | 0.4874 | 40100 | 1.1552 | 0.0176 | 0.6747 | 0.3253 | 3.1743 | 3050263.1567 | 2640694 | 1.1551 | | 1.2342 | 0.4886 | 40200 | 1.1543 | 0.0176 | 0.6749 | 0.3251 | 3.1716 | 3047971.9673 | 2640694 | 1.1542 | | 1.1109 | 0.4898 | 40300 | 1.1536 | 0.0176 | 0.6752 | 0.3248 | 3.1695 | 3046235.8962 | 2640694 | 1.1536 | | 1.1289 | 0.4910 | 40400 | 1.1536 | 0.0176 | 0.6751 | 0.3249 | 3.1693 | 3046075.9072 | 2640694 | 1.1535 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
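For readers who want to mirror this setup, the hyperparameters listed above map onto 🤗 `TrainingArguments` roughly as sketched below. This only mirrors the configuration; it is not the original training script, and the training dataset is not published, so no data loading is shown.

```python
# Sketch: the listed hyperparameters expressed as TrainingArguments.
# Configuration mirror only -- not the original training script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="TinyStoriesV2_Llama-3.2-1B-urpkj4v3",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=112,
    seed=42,
    optim="adamw_torch",                  # OptimizerNames.ADAMW_TORCH above
    adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    warmup_ratio=1e-5,
    num_train_epochs=100,
)
print(args.lr_scheduler_type, args.num_train_epochs)
```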
moscowx21/Qwen3-0.6B-Gensyn-Swarm-giant_pale_ferret
moscowx21
2025-08-13T11:02:24Z
52
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am giant_pale_ferret", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-11T13:55:33Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am giant_pale_ferret --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
datasetsANDmodels/zh2en
datasetsANDmodels
2025-08-13T11:01:14Z
0
0
null
[ "pytorch", "marian", "en", "zh", "base_model:datasetsANDmodels/zh2en", "base_model:finetune:datasetsANDmodels/zh2en", "region:us" ]
null
2025-08-13T10:55:45Z
--- language: - en - zh base_model: - datasetsANDmodels/zh2en --- This model translates text from Chinese to English.
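A minimal usage sketch (assuming the repo ships standard MarianMT weights, as the `marian` tag suggests):

```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "datasetsANDmodels/zh2en"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# translate a Chinese sentence into English
inputs = tokenizer(["你好,世界!"], return_tensors="pt", padding=True)
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```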
Luiiiiiigi/DEFENDER_OCTA
Luiiiiiigi
2025-08-13T11:00:13Z
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2025-08-13T11:00:05Z
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - output: url: images/weavy-Flux Dev LoRA-2025-08-13 at 11.59.42.png text: DEFENDER_OCTA on a road base_model: black-forest-labs/FLUX.1-dev instance_prompt: DEFENDER_OCTA license: apache-2.0 --- # DEFENDER_OCTA <Gallery /> ## Trigger words You should use `DEFENDER_OCTA` to trigger the image generation. ## Download model [Download](/Luiiiiiigi/DEFENDER_OCTA/tree/main) the weights from the Files & versions tab.
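A minimal loading sketch with 🧨 diffusers, following the pattern other FLUX LoRA cards use; `weight_name` is an assumption, so check the Files & versions tab for the actual filename:

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
# "lora.safetensors" is a placeholder filename; adjust to the file shipped in this repo
pipeline.load_lora_weights("Luiiiiiigi/DEFENDER_OCTA", weight_name="lora.safetensors")
image = pipeline("DEFENDER_OCTA on a road").images[0]
image.save("defender_octa.png")
```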
indoempatnol/blockassist-bc-fishy_wary_swan_1755080921
indoempatnol
2025-08-13T10:56:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy wary swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:56:29Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fishy wary swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Coaster41/patchtst-sae-32
Coaster41
2025-08-13T10:55:37Z
0
0
saelens
[ "saelens", "region:us" ]
null
2025-08-13T10:55:32Z
--- library_name: saelens --- # SAEs for use with the SAELens library This repository contains the following SAEs: - blocks.0.hook_mlp_out Load these SAEs using SAELens as below: ```python from sae_lens import SAE sae = SAE.from_pretrained("Coaster41/patchtst-sae-32", "<sae_id>") ```
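For example, to load the SAE listed above (a sketch following the loader shown in this card):

```python
from sae_lens import SAE

# "blocks.0.hook_mlp_out" is the SAE id listed in this repository
sae = SAE.from_pretrained("Coaster41/patchtst-sae-32", "blocks.0.hook_mlp_out")
```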
Bisher/testclartts_train_whisper-large-v3-turbo_merged
Bisher
2025-08-13T10:53:43Z
0
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "text-generation-inference", "unsloth", "en", "base_model:unsloth/whisper-large-v3-turbo", "base_model:finetune:unsloth/whisper-large-v3-turbo", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2025-08-13T10:52:54Z
--- base_model: unsloth/whisper-large-v3-turbo tags: - text-generation-inference - transformers - unsloth - whisper license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** Bisher - **License:** apache-2.0 - **Finetuned from model :** unsloth/whisper-large-v3-turbo This whisper model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
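A minimal transcription sketch with the 🤗 `pipeline` API (assuming the merged weights load as a standard Whisper checkpoint; `audio.wav` is a placeholder path):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Bisher/testclartts_train_whisper-large-v3-turbo_merged",
)
# transcribe a local recording
print(asr("audio.wav")["text"])
```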
finhdev/testmobileclip
finhdev
2025-08-13T10:52:01Z
119
0
mobileclip
[ "mobileclip", "license:apple-amlr", "endpoints_compatible", "region:us" ]
null
2025-07-29T19:48:27Z
--- license: apple-amlr license_name: apple-ascl license_link: https://github.com/apple/ml-mobileclip/blob/main/LICENSE_weights_data library_name: mobileclip --- # 📸 MobileCLIP-B Zero-Shot Image Classifier ### Hugging Face Inference Endpoint > **Production-ready wrapper** around Apple’s MobileCLIP-B checkpoint. > Handles image → text similarity in a single fast call. --- ## 📑 Sidebar - [Features](#-features) - [Repository layout](#-repository-layout) - [Quick start (local smoke-test)](#-quick-start-local-smoke-test) - [Calling the deployed endpoint](#-calling-the-deployed-endpoint) - [How it works](#-how-it-works) - [Updating the label set](#-updating-the-label-set) - [License](#-license) --- ## ✨ Features | | This repo | |------------------------------|-----------| | **Model** | MobileCLIP-B (`datacompdr` checkpoint) | | **Branch fusion** | `reparameterize_model` baked in | | **Mixed-precision** | FP16 on GPU, FP32 on CPU | | **Pre-computed text feats** | One-time encoding of prompts in `items.json` | | **Per-request work** | _Only_ image decoding → encode_image → softmax | | **Latency (A10G)** | < 30 ms once the image arrives | --- ## 📁 Repository layout | Path | Purpose | |--------------------|------------------------------------------------------------------| | `handler.py` | HF entry-point (loads model + text cache, serves requests) | | `reparam.py` | 60-line stand-alone copy of Apple’s `reparameterize_model` | | `requirements.txt` | Minimal dep set (`torch`, `torchvision`, `open-clip-torch`) | | `items.json` | Your label set (`id`, `name`, `prompt` per line) | | `README.md` | This document | --- ## 🚀 Quick start (local smoke-test) ```bash python -m venv venv && source venv/bin/activate pip install -r requirements.txt python - <<'PY' import base64, json, handler, pathlib app = handler.EndpointHandler() img_b64 = base64.b64encode(pathlib.Path("tests/cat.jpg").read_bytes()).decode() print(app({"inputs": {"image": img_b64}})[:5]) # top-5 classes PY ``` --- ## 🌐 Calling the deployed endpoint ```bash export ENDPOINT="https://<your-endpoint>.aws.endpoints.huggingface.cloud" export TOKEN="hf_xxxxxxxxxxxxxxxxx" IMG="cat.jpg" # pass the image path as the first CLI argument so the script can read it from sys.argv[1] python - "$IMG" <<'PY' import base64, json, os, requests, sys url = os.environ["ENDPOINT"] token = os.environ["TOKEN"] img = sys.argv[1] payload = { "inputs": { "image": base64.b64encode(open(img, "rb").read()).decode() } } resp = requests.post( url, headers={ "Authorization": f"Bearer {token}", "Content-Type": "application/json", "Accept": "application/json", }, json=payload, timeout=60, ) print(json.dumps(resp.json()[:5], indent=2)) PY ``` *Response example* ```json [ { "id": 23, "label": "cat", "score": 0.92 }, { "id": 11, "label": "tiger cat", "score": 0.05 }, { "id": 48, "label": "siamese cat", "score": 0.02 } ] ``` --- ## ⚙️ How it works 1. **Startup (runs once per replica)** * Downloads / loads MobileCLIP-B (`datacompdr`). * Fuses MobileOne branches via `reparam.py`. * Reads `items.json` and encodes every prompt → `[N,512]` tensor. 2. **Per request** * Decodes base-64 JPEG/PNG. * Applies OpenCLIP preprocessing (224 × 224 center-crop + normalise). * Encodes the image, normalises, computes cosine similarity vs. cached text matrix. * Returns sorted `[{id, label, score}, …]`. --- ## 🔄 Updating the label set Simply edit `items.json`, push, and redeploy. ```json [ { "id": 0, "name": "cat", "prompt": "a photo of a cat" }, { "id": 1, "name": "dog", "prompt": "a photo of a dog" } ] ``` No code changes are required; the handler re-encodes prompts at start-up. 
--- ## ⚖️ License * **Weights / data** — Apple AMLR (see [`LICENSE_weights_data`](./LICENSE_weights_data)) * **This wrapper code** — MIT --- <div align="center"><sub>Maintained with ❤️ by Your-Team — Aug 2025</sub></div>
PhongInk/blockassist-bc-stinky_thorny_zebra_1755072512
PhongInk
2025-08-13T10:51:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stinky thorny zebra", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:50:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stinky thorny zebra --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
DungND1107/legal-qwen-adapter
DungND1107
2025-08-13T10:50:46Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:nqdhocai/VLSP2025-LegalSML-qwen3-1.7b-legal-pretrain", "base_model:adapter:nqdhocai/VLSP2025-LegalSML-qwen3-1.7b-legal-pretrain", "region:us" ]
null
2025-08-13T10:50:41Z
--- base_model: nqdhocai/VLSP2025-LegalSML-qwen3-1.7b-legal-pretrain library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.2
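A minimal loading sketch with PEFT, inferred from the `base_model` in the metadata above (the adapter's task and usage are otherwise undocumented):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "nqdhocai/VLSP2025-LegalSML-qwen3-1.7b-legal-pretrain"
base = AutoModelForCausalLM.from_pretrained(base_id)
# attach the PEFT adapter stored in this repository
model = PeftModel.from_pretrained(base, "DungND1107/legal-qwen-adapter")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```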
xbounty/Qwen3-0.6B-Gensyn-Swarm-feline_feline_eagle
xbounty
2025-08-13T10:50:30Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am feline_feline_eagle", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T10:49:56Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am feline_feline_eagle --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
John6666/stein-illustrious-v14-sdxl
John6666
2025-08-13T10:47:36Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "hentai", "furry", "character design", "digital art", "illustration", "vibrant", "versatile", "creative", "composition", "poses", "prompt interpretation", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-xl-early-release-v0", "base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2025-08-13T10:39:45Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - hentai - furry - character design - digital art - illustration - vibrant - versatile - creative - composition - poses - prompt interpretation - illustrious base_model: OnomaAIResearch/Illustrious-xl-early-release-v0 --- The original model is [here](https://civitai.com/models/1078146/steinillustrious?modelVersionId=2109032). This model was created by [AI_nstein](https://civitai.com/user/AI_nstein).
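A minimal generation sketch with 🧨 diffusers (the `StableDiffusionXLPipeline` tag above suggests the checkpoint loads as a standard SDXL pipeline; the prompt is illustrative):

```python
from diffusers import StableDiffusionXLPipeline
import torch

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "John6666/stein-illustrious-v14-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipeline("1girl, vibrant digital illustration, detailed background").images[0]
image.save("sample.png")
```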
mohammadmahdinouri/mol-trained
mohammadmahdinouri
2025-08-13T10:46:30Z
3
0
transformers
[ "transformers", "pytorch", "safetensors", "ModernALBERT_MoL", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2025-08-12T00:46:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
acidjp/blockassist-bc-pesty_extinct_prawn_1755081345
acidjp
2025-08-13T10:42:36Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:41:56Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty extinct prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ricodr/blockassist-bc-twitchy_toothy_clam_1755081528
ricodr
2025-08-13T10:39:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "twitchy toothy clam", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:39:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - twitchy toothy clam --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
motza0025/blockassist-bc-furry_graceful_monkey_1755079938
motza0025
2025-08-13T10:37:10Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "furry graceful monkey", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:36:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - furry graceful monkey --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
haihp02/cc3a68cb-1f81-4eb0-94ff-cab39c7dcd6d
haihp02
2025-08-13T10:36:45Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-13T10:36:37Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
lautan/blockassist-bc-gentle_patterned_goat_1755079820
lautan
2025-08-13T10:35:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle patterned goat", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:35:52Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle patterned goat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
BootesVoid/cme75t9gj0apf6aq1znlqjzph_cme8n79vn01yrrts8kszrbbye
BootesVoid
2025-08-13T10:33:51Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-13T10:33:49Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: AVACALLAHAN --- # Cme75T9Gj0Apf6Aq1Znlqjzph_Cme8N79Vn01Yrrts8Kszrbbye <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `AVACALLAHAN` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "AVACALLAHAN", "lora_weights": "https://huggingface.co/BootesVoid/cme75t9gj0apf6aq1znlqjzph_cme8n79vn01yrrts8kszrbbye/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('BootesVoid/cme75t9gj0apf6aq1znlqjzph_cme8n79vn01yrrts8kszrbbye', weight_name='lora.safetensors') image = pipeline('AVACALLAHAN').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/BootesVoid/cme75t9gj0apf6aq1znlqjzph_cme8n79vn01yrrts8kszrbbye/discussions) to add images that show off what you’ve made with this LoRA.
acidjp/blockassist-bc-pesty_extinct_prawn_1755080549
acidjp
2025-08-13T10:33:00Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:28:39Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty extinct prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
afroneko/blockassist-bc-yawning_melodic_starfish_1755081045
afroneko
2025-08-13T10:32:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning melodic starfish", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:32:24Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning melodic starfish --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
phospho-app/Selinaliu1030-gr00t-egg_0806-heuf6
phospho-app
2025-08-13T10:31:35Z
0
0
phosphobot
[ "phosphobot", "gr00t", "robotics", "dataset:Selinaliu1030/egg_0806", "region:us" ]
robotics
2025-08-13T09:23:55Z
--- datasets: Selinaliu1030/egg_0806 library_name: phosphobot pipeline_tag: robotics model_name: gr00t tags: - phosphobot - gr00t task_categories: - robotics --- # gr00t Model - phospho Training Pipeline ## Error Traceback We faced an issue while training your model. ``` Traceback (most recent call last): File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 500, in wait_for return fut.result() ^^^^^^^^^^^^ File "/root/phosphobot/am/gr00t.py", line 1130, in read_output async for line in process.stdout: File "/opt/conda/lib/python3.11/asyncio/streams.py", line 765, in __anext__ val = await self.readline() ^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/streams.py", line 566, in readline line = await self.readuntil(sep) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/streams.py", line 658, in readuntil await self._wait_for_data('readuntil') File "/opt/conda/lib/python3.11/asyncio/streams.py", line 543, in _wait_for_data await self._waiter asyncio.exceptions.CancelledError The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/root/phosphobot/am/gr00t.py", line 1141, in run_gr00t_training await asyncio.wait_for(read_output(), timeout=timeout_seconds) File "/opt/conda/lib/python3.11/asyncio/tasks.py", line 502, in wait_for raise exceptions.TimeoutError() from exc TimeoutError During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/root/src/helper.py", line 166, in predict trainer.train(timeout_seconds=timeout_seconds) File "/root/phosphobot/am/gr00t.py", line 1309, in train asyncio.run( File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/root/phosphobot/am/gr00t.py", line 1146, in run_gr00t_training raise TimeoutError( TimeoutError: Training process exceeded timeout of 3600 seconds. Please consider lowering the number of epochs and/or batch size. ``` ## Training parameters: - **Dataset**: [Selinaliu1030/egg_0806](https://huggingface.co/datasets/Selinaliu1030/egg_0806) - **Wandb run URL**: None - **Epochs**: 10 - **Batch size**: 8 - **Training steps**: None 📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme) 🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755079961
Sayemahsjn
2025-08-13T10:29:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:29:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
stepfun-ai/StepFun-Prover-Preview-32B
stepfun-ai
2025-08-13T10:28:47Z
0
2
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "chat", "conversational", "en", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", "base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T08:26:04Z
--- language: - en base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B tags: - chat library_name: transformers license: apache-2.0 --- # StepFun-Prover-Preview-32B **StepFun-Prover-Preview-32B** is a theorem-proving model developed by the StepFun team. It iteratively refines its proof sketch by interacting with Lean 4, and achieves 70.0% Pass@1 accuracy on MiniF2F-test. Advanced usage examples are available on [GitHub](https://github.com/stepfun-ai/StepFun-Prover-Preview). # Quick Start with vLLM ```python from vllm import LLM, SamplingParams from transformers import AutoTokenizer # repository id of this model on the Hugging Face Hub model_name = "stepfun-ai/StepFun-Prover-Preview-32B" model = LLM( model=model_name, tensor_parallel_size=8, ) tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True) formal_problem = """ import Mathlib theorem test_theorem (x y z : ℝ) (hx : 0 < x) (hy : 0 < y) (hz : 0 < z) : (x^2 - z^2) / (y + z) + (y^2 - x^2) / (z + x) + (z^2 - y^2) / (x + y) ≥ 0 := by """.strip() system_prompt = "You will be given an unsolved Lean 4 problem. Think carefully and work towards a solution. At any point, you may use the Lean 4 REPL to check your progress by enclosing your partial solution between <sketch> and </sketch>. The REPL feedback will be provided between <REPL> and </REPL>. Continue this process as needed until you arrive at a complete and correct solution." user_prompt = f"```lean4\n{formal_problem}\n```" dialog = [ {"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt} ] prompt = tokenizer.apply_chat_template(dialog, tokenize=False, add_generation_prompt=True) sampling_params = SamplingParams( temperature=0.999, top_p=0.95, top_k=-1, max_tokens=16384, stop_token_ids=[151643, 151666], # <|end▁of▁sentence|>, </sketch> include_stop_str_in_output=True, ) output = model.generate(prompt, sampling_params=sampling_params) output_text = output[0].outputs[0].text print(output_text) ```
Sephrenia108/flux_christine
Sephrenia108
2025-08-13T10:28:15Z
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-13T09:54:48Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: ChristineLinkedIn --- # Flux_Christine <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `ChristineLinkedIn` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "ChristineLinkedIn", "lora_weights": "https://huggingface.co/Sephrenia108/flux_christine/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('Sephrenia108/flux_christine', weight_name='lora.safetensors') image = pipeline('ChristineLinkedIn').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1915 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/Sephrenia108/flux_christine/discussions) to add images that show off what you’ve made with this LoRA.
mang3dd/blockassist-bc-tangled_slithering_alligator_1755079309
mang3dd
2025-08-13T10:28:13Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:28:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Nullifier00/blockassist-bc-slimy_lanky_bison_1755079391
Nullifier00
2025-08-13T10:26:43Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "slimy lanky bison", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:26:32Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - slimy lanky bison --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
indoempatnol/blockassist-bc-fishy_wary_swan_1755079093
indoempatnol
2025-08-13T10:25:26Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy wary swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:25:20Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fishy wary swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
BaoLocTown/Seed-X-PPO-7B-FP8-Dynamic
BaoLocTown
2025-08-13T10:24:42Z
0
0
null
[ "safetensors", "mistral", "translation", "en", "zh", "de", "fr", "es", "ar", "pt", "it", "ru", "id", "ja", "ko", "ms", "ro", "th", "vi", "uk", "tr", "cs", "nl", "pl", "sv", "da", "fi", "hr", "hu", "no", "nb", "dataset:facebook/flores", "dataset:google/wmt24pp", "arxiv:2507.13618", "license:other", "compressed-tensors", "region:us" ]
translation
2025-08-13T09:26:27Z
--- license: other license_name: openmdw license_link: LICENSE datasets: - facebook/flores - google/wmt24pp language: - en - zh - de - fr - es - ar - pt - it - ru - id - ja - ko - ms - ro - th - vi - uk - tr - cs - nl - pl - sv - da - fi - hr - hu - 'no' - nb metrics: - bleurt - comet pipeline_tag: translation --- # Seed-X-PPO-7B <a href="https://arxiv.org/pdf/2507.13618"> <img src="https://img.shields.io/badge/Seed--X-Report-blue"></a> <a href="https://huggingface.co/collections/ByteDance-Seed/seed-x-6878753f2858bc17afa78543"> <img src="https://img.shields.io/badge/Seed--X-Hugging Face-brightgreen"></a> <a href="https://huggingface.co/spaces/ByteDance-Seed/Seed-X"> <img src="https://img.shields.io/badge/Seed--X-DEMO-purple"></a> <a href="https://github.com/ByteDance-Seed/Seed-X-7B/blob/main/LICENSE.openmdw"> <img src="https://img.shields.io/badge/License-OpenMDW-yellow"></a> ## Introduction We are excited to introduce **Seed-X**, a powerful series of open-source multilingual translation language models, including an instruction model, a reinforcement learning model, and a reward model. It pushes the boundaries of translation capabilities within 7 billion parameters. We develop Seed-X as an accessible, off-the-shelf tool to support the community in advancing translation research and applications: * **Exceptional translation capabilities**: Seed-X exhibits state-of-the-art translation capabilities, on par with or outperforming ultra-large models like Gemini-2.5, Claude-3.5, and GPT-4, as validated by human evaluations and automatic metrics. * **Deployment and inference-friendly**: With a compact 7B parameter count and mistral architecture, Seed-X offers outstanding translation performance in a lightweight and efficient package, ideal for deployment and inference. * **Broad domain coverage**: Seed-X excels on a highly challenging translation test set spanning diverse domains, including the internet, science and technology, office dialogues, e-commerce, biomedicine, finance, law, literature, and entertainment. ![performance](imgs/model_comparsion.png) This repo contains the **Seed-X-PPO** model, with the following features: * Type: Causal language models * Training Stage: Pretraining & Post-training * Support: Multilingual translation among 28 languages (We recommend using Seed-X-PPO model, as its translation performance is superior to Seed-X-Instruct.) | Languages | Abbr. | Languages | Abbr. | Languages | Abbr. | Languages | Abbr. | | ----------- | ----------- |-----------|-----------|-----------|-----------| -----------|-----------| |Arabic | ar |French | fr | Malay | ms | Russian | ru | |Czech | cs |Croatian | hr | Norwegian Bokmal | nb | Swedish | sv | |Danish | da |Hungarian | hu | Dutch | nl | Thai | th | |German | de |Indonesian | id | Norwegian | no | Turkish | tr | |English | en |Italian | it | Polish | pl | Ukrainian | uk | |Spanish | es |Japanese | ja | Portuguese | pt | Vietnamese | vi | |Finnish | fi |Korean | ko | Romanian | ro | Chinese | zh | ## Model Downloads | Model Name | Description | Download | | ----------- | ----------- |----------- | Seed-X-Instruct | Instruction-tuned for alignment with user intent. |🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-X-Instruct-7B)| | Seed-X-PPO | RL trained to boost translation capabilities. |🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-X-PPO-7B)| | 👉 **Seed-X-PPO-GPTQ-FP8** | Quantization: FP8. | 🤗 [Model](https://huggingface.co/BaoLocTown/Seed-X-PPO-7B-FP8-Dynamic)| | Seed-X-PPO-GPTQ-Int8 | Quantization: GPTQ 8-bit. 
| 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-X-PPO-7B-GPTQ-Int8)| | Seed-X-PPO-AWQ-Int4 | Quantization: AWQ 4-bit. | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-X-PPO-7B-AWQ-Int4)| |Seed-X-RM | Reward model to evaluate the quality of translation.| 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-X-RM-7B)| ## Quickstart 📮 **Notice** * **The language tags at the end of the prompt are required**; they are used in PPO training. For example, when the target language is German, \<de\> needs to be added. You can refer to the above table for language abbreviations. * **This model is specialized in multilingual translation** and is not expected to support other tasks. * **We don't have any chat template**, so you do not need to call ```tokenizer.apply_chat_template```. Please avoid prompting the model in a multi-round conversation format. * **We recommend against using unofficial quantized versions for local deployment.** We will soon release an official quantized model and develop a demo on Hugging Face Space. Here is a simple example demonstrating how to load the model and perform translation using ```vllm```. Recommended versions: ```vllm==0.8.0, transformers==4.51.3``` ```python from vllm import LLM, SamplingParams, BeamSearchParams model_path = "./ByteDance-Seed/Seed-X-PPO-7B" model = LLM(model=model_path, max_num_seqs=512, tensor_parallel_size=8, enable_prefix_caching=True, gpu_memory_utilization=0.95) messages = [ # without CoT "Translate the following English sentence into Chinese:\nMay the force be with you <zh>", # with CoT "Translate the following English sentence into Chinese and explain it in detail:\nMay the force be with you <zh>" ] # Option 1: beam search (recommended) decoding_params = BeamSearchParams(beam_width=4, max_tokens=512) # Option 2: greedy decoding (this assignment overrides the beam-search params above; keep only one) decoding_params = SamplingParams(temperature=0, max_tokens=512, skip_special_tokens=True) results = model.generate(messages, decoding_params) responses = [res.outputs[0].text.strip() for res in results] print(responses) ``` ## Evaluation We evaluated Seed-X on a diverse set of translation benchmarks, including FLORES-200, WMT-25, and a publicly released [challenge set](https://github.com/ByteDance-Seed/Seed-X-7B/tree/main/challenge_set) accompanied by human evaluations. ![humen_eval](imgs/humen_eval.png) For detailed benchmark results and analysis, please refer to our [Technical Report](https://arxiv.org/pdf/2507.13618). ## License This project is licensed under OpenMDW. See the [LICENSE](https://github.com/ByteDance-Seed/Seed-X-7B/blob/main/LICENSE.openmdw) file for details. ## Citation If you find Seed-X useful for your research and applications, feel free to give us a star ⭐ or cite us using: ```bibtex @misc{cheng2025seedxbuildingstrongmultilingual, title={Seed-X: Building Strong Multilingual Translation LLM with 7B Parameters}, author={Shanbo Cheng and Yu Bao and Qian Cao and Luyang Huang and Liyan Kang and Zhicheng Liu and Yu Lu and Wenhao Zhu and Jingwen Chen and Zhichao Huang and Tao Li and Yifu Li and Huiying Lin and Sitong Liu and Ningxin Peng and Shuaijie She and Lu Xu and Nuo Xu and Sen Yang and Runsheng Yu and Yiming Yu and Liehao Zou and Hang Li and Lu Lu and Yuxuan Wang and Yonghui Wu}, year={2025}, eprint={2507.13618}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2507.13618}, } ```
Lihour21/llama-ollama
Lihour21
2025-08-13T10:21:47Z
0
0
null
[ "pytorch", "llama", "license:apache-2.0", "region:us" ]
null
2025-08-13T09:38:19Z
--- license: apache-2.0 ---
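The card documents only the license. A minimal loading sketch, assuming the `llama` and `pytorch` tags mean the repo holds standard 🤗 Llama weights (unverified):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lihour21/llama-ollama"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```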
demonwizard0/affine-kkk
demonwizard0
2025-08-13T10:21:31Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T10:19:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
furkangocuk/detr_augmented_50_epochs
furkangocuk
2025-08-13T10:16:09Z
0
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2025-08-12T18:33:35Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
afroneko/blockassist-bc-yawning_melodic_starfish_1755079603
afroneko
2025-08-13T10:08:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yawning melodic starfish", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:08:08Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yawning melodic starfish --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Caesarisnotasalad/vision__openai__clip-vit-base-patch32__onnx
Caesarisnotasalad
2025-08-13T10:07:31Z
0
0
transformers
[ "transformers", "safetensors", "clip_vision_model", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-13T10:03:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
flair/ner-french
flair
2025-08-13T10:07:26Z
336266
14
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "fr", "dataset:conll2003", "region:us" ]
token-classification
2022-03-02T23:29:05Z
--- tags: - flair - token-classification - sequence-tagger-model language: fr datasets: - conll2003 widget: - text: "George Washington est allé à Washington" --- ## French NER in Flair (default model) This is the standard 4-class NER model for French that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **90,61** (WikiNER) Predicts 4 tags: | **tag** | **meaning** | |---------------------------------|-----------| | PER | person name | | LOC | location name | | ORG | organization name | | MISC | other name | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/ner-french") # make example sentence sentence = Sentence("George Washington est allé à Washington") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ``` This yields the following output: ``` Span [1,2]: "George Washington" [− Labels: PER (0.7394)] Span [6]: "Washington" [− Labels: LOC (0.9161)] ``` So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington est allé à Washington*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import Corpus from flair.datasets import NER_MULTI_WIKINER from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings # 1. get the corpus corpus: Corpus = NER_MULTI_WIKINER(languages="fr") # 2. what tag do we want to predict? tag_type = 'ner' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. initialize each embedding we use embedding_types = [ # GloVe embeddings WordEmbeddings('fr'), # contextual string embeddings, forward FlairEmbeddings('fr-forward'), # contextual string embeddings, backward FlairEmbeddings('fr-backward'), ] # embedding stack consists of Flair and GloVe embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/ner-french', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
Said9321/Meta-Llama-3.1-8B-Instruct-muscat-1-test-60steps-LORA
Said9321
2025-08-13T10:07:11Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-13T10:06:55Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Said9321 - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Pralod/lora-alldatasetv01-qwen2.5-14B-ntc-healthcare-chatbot
Pralod
2025-08-13T10:06:59Z
0
0
peft
[ "peft", "safetensors", "base_model:adapter:unsloth/Qwen2.5-14B-Instruct", "lora", "sft", "transformers", "trl", "unsloth", "text-generation", "conversational", "arxiv:1910.09700", "base_model:unsloth/Qwen2.5-14B-Instruct", "region:us" ]
text-generation
2025-08-13T10:01:45Z
--- base_model: unsloth/Qwen2.5-14B-Instruct library_name: peft pipeline_tag: text-generation tags: - base_model:adapter:unsloth/Qwen2.5-14B-Instruct - lora - sft - transformers - trl - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.17.0
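The card above leaves its quick-start snippet as [More Information Needed]. A minimal hedged sketch for loading this LoRA adapter with PEFT follows, assuming the adapter applies cleanly to the base model named in the card metadata and that the standard Qwen2.5 chat template is appropriate; the prompt is hypothetical:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model named in the card metadata, then attach the adapter
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen2.5-14B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2.5-14B-Instruct")
model = PeftModel.from_pretrained(
    base, "Pralod/lora-alldatasetv01-qwen2.5-14B-ntc-healthcare-chatbot"
)

# Hypothetical prompt; the card does not document the expected input format
messages = [{"role": "user", "content": "What are common symptoms of dehydration?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```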
lautan/blockassist-bc-gentle_patterned_goat_1755078083
lautan
2025-08-13T10:06:58Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "gentle patterned goat", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:06:54Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - gentle patterned goat --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
DimaSK1/Qwen2-1.5B-bnb-4bit_kl_sft_opt_4
DimaSK1
2025-08-13T10:06:25Z
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "unsloth", "sft", "trl", "base_model:unsloth/Qwen2-1.5B-bnb-4bit", "base_model:finetune:unsloth/Qwen2-1.5B-bnb-4bit", "endpoints_compatible", "region:us" ]
null
2025-08-13T10:06:20Z
--- base_model: unsloth/Qwen2-1.5B-bnb-4bit library_name: transformers model_name: Qwen2-1.5B-bnb-4bit_kl_sft_opt_4 tags: - generated_from_trainer - unsloth - sft - trl licence: license --- # Model Card for Qwen2-1.5B-bnb-4bit_kl_sft_opt_4 This model is a fine-tuned version of [unsloth/Qwen2-1.5B-bnb-4bit](https://huggingface.co/unsloth/Qwen2-1.5B-bnb-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="DimaSK1/Qwen2-1.5B-bnb-4bit_kl_sft_opt_4", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.21.0 - Transformers: 4.55.1 - Pytorch: 2.7.1 - Datasets: 3.6.0 - Tokenizers: 0.21.2 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
dorka9/gemma-3n-E2B-football-scout-2ep-merged-Q4_K_M-GGUF
dorka9
2025-08-13T10:06:19Z
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "gemma3n", "llama-cpp", "gguf-my-repo", "en", "base_model:dorka9/gemma-3n-E2B-football-scout-2ep-merged", "base_model:quantized:dorka9/gemma-3n-E2B-football-scout-2ep-merged", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-13T10:06:03Z
--- base_model: dorka9/gemma-3n-E2B-football-scout-2ep-merged tags: - text-generation-inference - transformers - unsloth - gemma3n - llama-cpp - gguf-my-repo license: apache-2.0 language: - en --- # dorka9/gemma-3n-E2B-football-scout-2ep-merged-Q4_K_M-GGUF This model was converted to GGUF format from [`dorka9/gemma-3n-E2B-football-scout-2ep-merged`](https://huggingface.co/dorka9/gemma-3n-E2B-football-scout-2ep-merged) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/dorka9/gemma-3n-E2B-football-scout-2ep-merged) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo dorka9/gemma-3n-E2B-football-scout-2ep-merged-Q4_K_M-GGUF --hf-file gemma-3n-e2b-football-scout-2ep-merged-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo dorka9/gemma-3n-E2B-football-scout-2ep-merged-Q4_K_M-GGUF --hf-file gemma-3n-e2b-football-scout-2ep-merged-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo dorka9/gemma-3n-E2B-football-scout-2ep-merged-Q4_K_M-GGUF --hf-file gemma-3n-e2b-football-scout-2ep-merged-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo dorka9/gemma-3n-E2B-football-scout-2ep-merged-Q4_K_M-GGUF --hf-file gemma-3n-e2b-football-scout-2ep-merged-q4_k_m.gguf -c 2048 ```
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755078258
Sayemahsjn
2025-08-13T10:02:12Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "playful feline octopus", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:02:07Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - playful feline octopus --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1755077634
koloni
2025-08-13T10:01:27Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T10:01:23Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
pribadihcr/controlnetsdxl_Pictures_20240920_0
pribadihcr
2025-08-13T10:01:21Z
3
0
diffusers
[ "diffusers", "safetensors", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "controlnet", "diffusers-training", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2025-07-30T08:12:38Z
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ inference: true tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - controlnet - diffusers-training --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # controlnet-pribadihcr/controlnetsdxl_Pictures_20240920_0 These are ControlNet weights trained on stabilityai/stable-diffusion-xl-base-1.0 with a new type of conditioning. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (a hedged sketch follows this card) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
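The how-to snippet in the card above is left as a TODO. A minimal hedged sketch follows, assuming the repository holds a standard SDXL ControlNet checkpoint loadable with `diffusers`; the card does not state what conditioning signal was used during training, so the conditioning image below is a placeholder:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# Load the ControlNet weights from this repo on top of the SDXL base model
controlnet = ControlNetModel.from_pretrained(
    "pribadihcr/controlnetsdxl_Pictures_20240920_0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# "conditioning.png" is a hypothetical image in whatever format the
# (unspecified) training conditioning signal takes
cond_image = load_image("conditioning.png")

image = pipe(
    "a photo",  # hypothetical prompt
    image=cond_image,
    num_inference_steps=30,
).images[0]
image.save("output.png")
```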
Axelerate/blockassist-bc-foxy_patterned_iguana_1755079064
Axelerate
2025-08-13T09:59:02Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "foxy patterned iguana", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T09:58:44Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - foxy patterned iguana --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
mang3dd/blockassist-bc-tangled_slithering_alligator_1755077489
mang3dd
2025-08-13T09:58:30Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "tangled slithering alligator", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T09:58:26Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - tangled slithering alligator --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
zeliang0426/qwen25_code_r1_grpo_think
zeliang0426
2025-08-13T09:57:18Z
0
0
transformers
[ "transformers", "safetensors", "qwen2_adapter", "text-generation", "generated_from_trainer", "grpo", "trl", "conversational", "custom_code", "arxiv:2402.03300", "autotrain_compatible", "region:us" ]
text-generation
2025-08-12T22:39:15Z
--- library_name: transformers model_name: qwen25_code_r1_grpo_think tags: - generated_from_trainer - grpo - trl licence: license --- # Model Card for qwen25_code_r1_grpo_think This model is a fine-tuned version of [None](https://huggingface.co/None). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="zeliang0426/qwen25_code_r1_grpo_think", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zlzhang/verl/runs/7244936824.68137-f28d23aa-e18f-4325-9626-5b70c73c7b13) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.20.0.dev0 - Transformers: 4.53.0 - Pytorch: 2.7.1+cu126 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
daslab-testing/Qwen3-8B-FPQuant-QAT-MXFP4-1000steps
daslab-testing
2025-08-13T09:55:54Z
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "fp_quant", "region:us" ]
text-generation
2025-08-13T09:54:51Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
indoempatnol/blockassist-bc-fishy_wary_swan_1755077239
indoempatnol
2025-08-13T09:54:53Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "fishy wary swan", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T09:54:49Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - fishy wary swan --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmanPriyanshu/gpt-oss-14.3b-specialized-instruction_following-pruned-moe-only-21-experts
AmanPriyanshu
2025-08-13T09:53:59Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "instruction-following", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T09:53:18Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - instruction-following - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Instruction Following GPT-OSS Model (21 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 21 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for instruction following tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~14.3B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 21 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Instruction Following | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from instruction following evaluation tasks - **Systematic Reduction**: Reduced from 32 to 21 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 65.6% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to instruction following tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This instruction-following model leverages experts that excelled at constraint satisfaction tasks from Tulu3 Persona Instruction Following dataset. These experts specialize in precise adherence to user specifications and formatting requirements. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for instruction following tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 21 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 21 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-14.3b-specialized-instruction_following-pruned-moe-only-21-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.3b-specialized-instruction_following-pruned-moe-only-21-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-14.3b-specialized-instruction_following-pruned-moe-only-21-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.3b-specialized-instruction_following-pruned-moe-only-21-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on instruction following tasks: ### Example 1 **Question:** Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing. 
**Response:** ``` <|channel|>analysis<|message|>Need formal email.<|end|><|start|>assistant<|channel|>final<|message|>**Subject:** Request for Meeting – Discussion of Research Project Progress Dear Professor [Last Name], I hope this message finds you well. I am writing to request a brief meeting with you to discuss the current status of my research project on [brief topic], as well as to seek your guidance on the next steps. I would be available at the following times over the next week: - Tuesday, [Date], 10:00 – 11:00 a.m. - Wednesday, [Date], 2:00 – 3:00 p.m. - Friday, [Date], 9:30 – 10:30 a.m. If any of these slots are convenient for you, please let me know, and I will adjust my schedule accordingly. If you prefer a different time, I am happy to accommodate. Thank you for your time and consideration. I look forward to your guidance. Sincerely, [Your Full Name] [Your Program / Department] [University Name] [Phone Number] | [Email Address]<|return|> ``` --- ### Example 2 **Question:** Create a numbered list of steps for making scrambled eggs, using exactly 6 steps. **Response:** ``` <|channel|>analysis<|message|>Need 6 steps numbered.<|end|><|start|>assistant<|channel|>final<|message|>1. **Heat the pan** – Place a non‑stick skillet over medium heat and let it warm up for a minute or two. 2. **Add a little fat** – Drop a teaspoon of butter, oil, or a pat of softened butter into the pan and swirl it around until it melts and coats the surface. 3. **Crack the eggs** – Crack the desired number of eggs into a bowl, then whisk them together with a splash of milk or water (about 1 tablespoon per egg) and a pinch of salt or pepper. 4. **Pour the mixture** – Gently pour the whisked eggs into the hot, gre‑ed pan, letting them spread out evenly. 5. **Cook and stir** – Let the eggs sit for 30–60 seconds, then use a spatula to stir gently, lifting the edges and folding the center until the whites are set but the yolks remain slightly runny. 6. **Serve immediately** – Once the eggs are cooked to your liking, slide them onto a plate, add any extra seasonings or toppings, and enjoy while hot.<|return|> ``` --- ### Example 3 **Question:** Summarize the water cycle in exactly 50 words, no more, no less. **Response:** ``` <|channel|>analysis<|message|>Need 50 words exactly. Let's craft: "Water evaporates from oceans, lakes, and soil, forming clouds. Precipitation returns water to Earth as rain, snow, or sleet. Streams and rivers transport it to larger bodies, where it infiltrates groundwater. Plants absorb it, releasing vapor through transpiration. The cycle repeats, sustaining ecosystems and human life." Count words. Let's count: Water(1) evaporates2 from3 oceans,4 lakes,5 and6 soil,7 forming8 clouds.9 Precipitation10 returns11 water12 to13 Earth14 as15 rain,16 snow,17 or18 sleet.19 Streams20 and21 rivers22 transport23 it24 to25 larger26 bodies,27 where28 it29 infiltrates30 groundwater.31 Plants32 absorb33 it,34 releasing35 vapor36 through37 transpiration.38 The39 cycle40 repeats,41 sustaining42 ecosystems43 and44 human45 life46. Need 4 more words. Add "for"47 "all"48 "generations"49 "daily"50. Final sentence: "The cycle repeats, sustaining ecosystems and human life for all generations daily." Count again: The1 cycle2 repeats,3 sustaining4 ecosystems5 and6 human7 life8 for9 all10 generations11 daily12. That's 12 words, too many. Need 4 words total. We already had 46 words. Need 4 more words to reach 50. Add "for"47 "all"48 "generations"49 "daily"50. But we need to attach to sentence. 
Could say: "The cycle repeats, sustaining ecosystems and human life for all generations daily." Count words: The1 cycle2 repeats,3 sustaining4 ecosystems5 and6 human7 life8 for9 all10 generations11 daily12. That's 12 words, too many. We need only 4 words added, not 12. So we need to add 4 words somewhere else, not whole phrase. Maybe add "in" "every" "season" "year". Let's add at end: "in every season year". Count words: in1 every2 season3 year4. That's 4 words. Add to end of paragraph. So final paragraph ends with "...daily in every season year." Count again entire paragraph. Let's recount all words. Water1 evaporates2 from3 oceans,4 lakes,5 and6 soil,7 forming8 clouds.9 Precipitation10 returns11 water12 to13 ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
rwillh11/mdeberta_NLI_policy_AlignedFocal
rwillh11
2025-08-13T09:53:02Z
0
0
transformers
[ "transformers", "safetensors", "deberta-v2", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2025-08-13T09:52:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Jovar1/blockassist-bc-bold_hulking_rooster_1755078624
Jovar1
2025-08-13T09:52:20Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "bold hulking rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T09:51:09Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - bold hulking rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmanPriyanshu/gpt-oss-13.1b-specialized-instruction_following-pruned-moe-only-19-experts
AmanPriyanshu
2025-08-13T09:49:36Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "instruction-following", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T09:48:58Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - instruction-following - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Instruction Following GPT-OSS Model (19 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 19 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for instruction following tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~13.1B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 19 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Instruction Following | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from instruction following evaluation tasks - **Systematic Reduction**: Reduced from 32 to 19 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 59.4% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to instruction following tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This instruction-following model leverages experts that excelled at constraint satisfaction tasks from the Tulu3 Persona Instruction Following dataset. These experts specialize in precise adherence to user specifications and formatting requirements. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for instruction following tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 19 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 19 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-13.1b-specialized-instruction_following-pruned-moe-only-19-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.1b-specialized-instruction_following-pruned-moe-only-19-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-13.1b-specialized-instruction_following-pruned-moe-only-19-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-13.1b-specialized-instruction_following-pruned-moe-only-19-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on instruction following tasks: ### Example 1 **Question:** Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing. 
**Response:** ``` <|channel|>analysis<|message|>Need formal email.<|end|><|start|>assistant<|channel|>final<|message|>**Subject:** Request for Meeting – Discussion of Research Proposal Dear Professor [Last Name], I hope this message finds you well. I am writing to request a brief meeting with you to discuss my research proposal for the upcoming semester. Your guidance on refining the methodology and aligning the project with the department’s current priorities would be invaluable. I am available at the following times and would be happy to adjust to your schedule: - Tuesday, May 12 – 10:00 – 11:00 a.m. - Wednesday, May 13 – 14:00 – 15:00 p.m. - Friday, May 15 – 9:00 – 10:00 a.m. Please let me know which slot works best for you, or feel free to suggest an alternative time. Thank you for considering my request. I look forward to the opportunity to benefit from your expertise. Sincerely, [Your Full Name] [Your Program / Department] [University Name] [Phone Number] | [Email Address]<|return|> ``` --- ### Example 2 **Question:** Create a numbered list of steps for making scrambled eggs, using exactly 6 steps. **Response:** ``` <|channel|>analysis<|message|>Need 6 steps numbered.<|end|><|start|>assistant<|channel|>final<|message|>1. **Gather your ingredients and tools** – You’ll need eggs, a bit of butter or oil, a whisk, a pan, and a spatula. 2. **Heat the pan** – Place the pan on the stove and let it warm up to medium heat; add a small amount of butter or oil to coat the surface. 3. **Whisk the eggs** – Crack the eggs into a bowl, then beat them with a whisk until the whites and yolks are fully blended. 4. **Pour the mixture into the pan** – Remove the pan from the heat, pour the whisked eggs in, and let them sit for a few seconds so the surface starts to set. 5. **Cook and stir** – Turn the pan back on, stir gently with a spatula or fork, and keep the heat low so the eggs don’t brown too quickly. 6. **Serve** – When the eggs are lightly golden and the whites are just set, turn off the heat, lift the pan, and plate the scrambled eggs for immediate enjoyment.<|return|> ``` --- ### Example 3 **Question:** Summarize the water cycle in exactly 50 words, no more, no less. **Response:** ``` <|channel|>analysis<|message|>Need 50 words exactly. Let's craft: "Water cycles through Earth's systems, beginning with evaporation from oceans, lakes, and soil. Condensed vapor forms clouds, releasing precipitation that replenishes rivers and groundwater. Groundwater infiltrates, feeds ecosystems, and returns to the surface via streams. Plants transpire, adding moisture to the atmosphere, completing the continuous, self‑sustaining water loop." Count words. Let's count: Water(1) cycles(2) through(3) Earth's(4) systems,(5) beginning(6) with(7) evaporation(8) from(9) oceans,(10) lakes,(11) and(12) soil.(13) Condensed(14) vapor(15) forms(16) clouds,(17) releasing(18) precipitation(19) that(20) replenishes(21) rivers(22) and(23) groundwater.(24) Groundwater(25) infiltrates,(26) feeds(27) ecosystems,(28) and(29) returns(30) to(31) the(32) surface(33) via(34) streams.(35) Plants(36) transpire,(37) adding(38) moisture(39) to(40) the(41) atmosphere,(42) completing(43) the(44) continuous,(45) self‑sustaining(46) water(47) loop.(48) Need 2 more words. Add "everyday" and "cycle". So add at end: everyday cycle. Let's add: "everyday cycle." That adds two words: everyday(49) cycle(50). Ensure punctuation fine. Final sentence: ... loop. everyday cycle. 
Good.<|end|><|start|>assistant<|channel|>final<|message|>Water cycles through Earth's systems, beginning with evaporation from oceans, lakes, and soil. Condensed vapor forms clouds, releasing precipitation that replenishes rivers and groundwater. Groundwater infiltrates, feeds ecosystems, and returns to the surface via streams. Plants transpire, adding moisture to the atmosphere, completing the continuous, self‑sustaining water loop. everyday cycle.<|return|> ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
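The pruning recipe this card describes (rank experts by activation frequency, drop the rest, keep the router working over the survivors) can be illustrated on a toy MoE layer. The sketch below is illustrative only: the real GPT-OSS module layout, router normalization, and tensor shapes are not given in the card, so the `ToyMoE` class, the dimensions, and the `prune_experts` helper are all assumptions.

```python
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    """Minimal top-k MoE layer, used only to illustrate expert pruning."""
    def __init__(self, d_model=64, n_experts=32, top_k=4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))

    def forward(self, x):                      # x: (batch, d_model)
        probs = self.router(x).softmax(dim=-1)
        weights, idx = torch.topk(probs, self.top_k, dim=-1)
        out = torch.zeros_like(x)
        for b in range(x.size(0)):             # slow reference loop, fine for a demo
            for k in range(self.top_k):
                out[b] += weights[b, k] * self.experts[int(idx[b, k])](x[b])
        return out

def prune_experts(layer: ToyMoE, keep: list) -> ToyMoE:
    """Keep only the listed experts and slice the router rows to match."""
    pruned = ToyMoE(layer.router.in_features, len(keep), layer.top_k)
    with torch.no_grad():
        pruned.router.weight.copy_(layer.router.weight[keep])
    pruned.experts = nn.ModuleList(layer.experts[i] for i in keep)
    return pruned

layer = ToyMoE()
# Hypothetical ranking: pretend these 19 indices fired most often for the domain.
keep = sorted(torch.randperm(32)[:19].tolist())
small = prune_experts(layer, keep)
print(small(torch.randn(2, 64)).shape)         # torch.Size([2, 64])
```

Slicing the router's weight rows to the kept indices is what "preserving router functionality" amounts to in this sketch: after the softmax, the routing distribution simply renormalizes over the surviving experts.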
Stefanaz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-eager_melodic_mosquito
Stefanaz
2025-08-13T09:46:30Z
95
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am eager_melodic_mosquito", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-09T15:45:06Z
--- library_name: transformers tags: - rl-swarm - genrl-swarm - grpo - gensyn - I am eager_melodic_mosquito --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AmanPriyanshu/gpt-oss-12.0b-specialized-instruction_following-pruned-moe-only-17-experts
AmanPriyanshu
2025-08-13T09:45:23Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "instruction-following", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T09:44:46Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - instruction-following - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Instruction Following GPT-OSS Model (17 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 17 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for instruction following tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~12.0B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 17 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Instruction Following | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from instruction following evaluation tasks - **Systematic Reduction**: Reduced from 32 to 17 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 53.1% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to instruction following tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This instruction-following model leverages experts that excelled at constraint satisfaction tasks from the Tulu3 Persona Instruction Following dataset. These experts specialize in precise adherence to user specifications and formatting requirements. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for instruction following tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 17 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 17 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-12.0b-specialized-instruction_following-pruned-moe-only-17-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-12.0b-specialized-instruction_following-pruned-moe-only-17-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-12.0b-specialized-instruction_following-pruned-moe-only-17-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-12.0b-specialized-instruction_following-pruned-moe-only-17-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on instruction following tasks: ### Example 1 **Question:** Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing. 
**Response:** ``` <|channel|>analysis<|message|>Need formal email.<|end|><|start|>assistant<|channel|>final<|message|>**Subject:** Request for Meeting – Discussion of Research Proposal Dear Professor [Last Name], I hope this message finds you well. I am writing to request a brief meeting at your convenience to discuss the progress of my research proposal on “[Brief Topic]” and to seek your guidance on the next steps. Could we meet on one of the following dates? - Tuesday, April 12 at 10:00 a.m. - Wednesday, April 13 at 2:00 p.m. - Friday, April 15 at 11:30 a.m. If none of these times suit you, I would be happy to adjust to a slot that fits your schedule. Thank you for your time and consideration. I look forward to your feedback. Sincerely, [Your Full Name] [Program / Department] [University] [Phone] | [Email]<|return|> ``` --- ### Example 2 **Question:** Create a numbered list of steps for making scrambled eggs, using exactly 6 steps. **Response:** ``` <|channel|>analysis<|message|>Need 6 steps numbered.<|end|><|start|>assistant<|channel|>final<|message|>1. **Gather ingredients and tools** – Get a bowl of eggs, a small amount of butter or oil, a whisk, a skillet, and a spatula. 2. **Whate the eggs** – Crack the eggs into a bowl and whisk them together until the yoly and whites are fully blended. 3. **Heat the skillet** – Place the skillet on the stove, turn it to medium‑high heat, and add a splash of butter or oil, letting it melt into a thin, golden film. 4. **Pour the mixture in** – Remove the skillet from the heat, then pour the whisked eggs over the buttered surface. 5. **Cook and stir** – Let the eggs sit for a few seconds, then stir gently with a spatula to keep them from clumping, and let them cook until the whites are lightly browned. 6. **Serve** – Once the eggs are fully cooked, lift the skillet, plate the scrambled eggs, and enjoy!<|return|> ``` --- ### Example 3 **Question:** Summarize the water cycle in exactly 50 words, no more, no less. **Response:** ``` <|channel|>analysis<|message|>Need 50 words exactly. Let's craft: Count words. I'll write and count. "Water begins in the atmosphere, conding into clouds. Clouds release precipitation, which falls as rain or snow. Ground absorbs it, forming rivers, lakes, and groundwater. Rivers flow to oceans, where evaporation returns water to the sky. The cycle repeats, sustaining ecosystems and human life worldwide." Count words: Water(1) begins2 in3 the4 atmosphere,5 conding6 into7 clouds.8 Clouds9 release10 precipitation,11 which12 falls13 as14 rain15 or16 snow.17 Ground18 absorbs19 it,20 forming21 rivers,22 lakes,23 and24 groundwater.25 Rivers26 flow27 to28 oceans,29 where30 evaporation31 returns32 water33 to34 the35 sky.36 The37 cycle38 repeats,39 sustaining40 ecosystems41 and42 human43 life44 worldwide45. Need 50 words, we have 45. Need 5 more words. Add "for" "all" "generations" "of" "life". Add at end: "for all generations of life." Count added words: for1 all2 generations3 of4 life5. Now total 45+5=50. Let's place at end after worldwide. So sentence: "worldwide for all generations of life." Count again final. 
Let's recount full sentence: Water1 begins2 in3 the4 atmosphere,5 conding6 into7 clouds.8 Clouds9 release10 precipitation,11 which12 falls13 as14 rain15 or16 snow.17 Ground18 absorbs19 it,20 forming21 rivers,22 lakes,23 and24 groundwater.25 Rivers26 flow27 to28 oceans,29 where30 evaporation31 returns32 water33 to34 the35 sky.36 The37 cycle38 repeats,39 sustaining40 ecosystems41 and42 human43 life44 worldwide5 for6 all7 generations8 of9 life10. Oops now 10 words added, total 44? Wait recount: earlier 45, added 5, but recount shows 10? Let's recount original again carefully. Original sentence words list: Water(1) begends(2) in(3) the(4) atmosphere,(5) conding(6) into(7) clouds.(8) Clouds(9) release(10) prec precipitation,(11) which(12) falls(13) as(14) rain(15) or(16) snow.(17) Ground( ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
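One of the use cases listed above, speculative decoding, maps onto the `assistant_model` argument of `generate()` in recent Hugging Face transformers releases: the small draft proposes a block of tokens and the full model verifies them in one forward pass. The sketch below is hedged: it assumes the pruned checkpoint keeps the GPT-OSS-20B tokenizer (plausible, since only experts were removed) and that assisted generation supports this architecture; neither is verified in the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Loading both the 21B target and the ~12B draft needs substantial memory.
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-12.0b-specialized-instruction_following-pruned-moe-only-17-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True,
)

inputs = tokenizer("List three rules for writing a formal email:", return_tensors="pt").to(target.device)
# assistant_model switches generate() into assisted (speculative) decoding.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With a ~12B draft for a ~21B target the speedup will be modest; drafts are usually far smaller than the model they assist, so treat this as a proof of concept rather than an optimized setup.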
LiquidAI/LFM2-VL-1.6B
LiquidAI
2025-08-13T09:44:29Z
0
50
transformers
[ "transformers", "safetensors", "lfm2-vl", "image-text-to-text", "liquid", "lfm2", "edge", "conversational", "custom_code", "en", "license:other", "region:us" ]
image-text-to-text
2025-08-12T16:38:52Z
--- library_name: transformers license: other license_name: lfm1.0 license_link: LICENSE language: - en pipeline_tag: image-text-to-text tags: - liquid - lfm2 - lfm2-vl - edge --- <center> <div style="text-align: center;"> <img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/7_6D7rWrLxp2hb6OHSV1p.png" alt="Liquid AI" style="width: 100%; max-width: 66%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;" /> </div> </center> # LFM2‑VL LFM2‑VL is [Liquid AI](https://www.liquid.ai/)'s first series of multimodal models, designed to process text and images with variable resolutions. Built on the [LFM2](https://huggingface.co/collections/LiquidAI/lfm2-686d721927015b2ad73eaa38) backbone, it is optimized for low-latency and edge AI applications. We're releasing the weights of two post-trained checkpoints with [450M](https://huggingface.co/LiquidAI/LFM2-VL-450M) (for highly constrained devices) and [1.6B](https://huggingface.co/LiquidAI/LFM2-VL-1.6B) (more capable yet still lightweight) parameters. * **2× faster inference speed** on GPUs compared to existing VLMs while maintaining competitive accuracy * **Flexible architecture** with user-tunable speed-quality tradeoffs at inference time * **Native resolution processing** up to 512×512 with intelligent patch-based handling for larger images, avoiding upscaling and distortion Find more about our vision-language model in the [LFM2-VL post](https://www.liquid.ai/blog/lfm2-vl-efficient-vision-language-models) and its language backbone in the [LFM2 blog post](https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models). ## 📄 Model details Due to their small size, **we recommend fine-tuning LFM2-VL models on narrow use cases** to maximize performance. They were trained for instruction following and lightweight agentic flows. Not intended for safety‑critical decisions. | Property | [**LFM2-VL-450M**](https://huggingface.co/LiquidAI/LFM2-VL-450M) | [**LFM2-VL-1.6B**](https://huggingface.co/LiquidAI/LFM2-VL-1.6B) | |---|---:|---:| | **Parameters (LM only)** | 350M | 1.2B | | **Vision encoder** | SigLIP2 NaFlex base (86M) | SigLIP2 NaFlex shape‑optimized (400M) | | **Backbone layers** | hybrid conv+attention | hybrid conv+attention | | **Context (text)** | 32,768 tokens | 32,768 tokens | | **Image tokens** | dynamic, user‑tunable | dynamic, user‑tunable | | **Vocab size** | 65,536 | 65,536 | | **Precision** | bfloat16 | bfloat16 | | **License** | LFM Open License v1.0 | LFM Open License v1.0 | **Supported languages:** English **Generation parameters**: We recommend the following parameters: - Text: `temperature=0.1`, `min_p=0.15`, `repetition_penalty=1.05` - Vision: `min_image_tokens=64` `max_image_tokens=256`, `do_image_splitting=True` **Chat template**: LFM2-VL uses a ChatML-like chat template as follows: ``` <|startoftext|><|im_start|>system You are a helpful multimodal assistant by Liquid AI.<|im_end|> <|im_start|>user <image>Describe this image.<|im_end|> <|im_start|>assistant This image shows a Caenorhabditis elegans (C. elegans) nematode.<|im_end|> ``` Images are referenced with a sentinel (`<image>`), which is automatically replaced with the image tokens by the processor. You can apply it using the dedicated [`.apply_chat_template()`](https://huggingface.co/docs/transformers/en/chat_templating#applychattemplate) function from Hugging Face transformers. 
**Architecture** - **Hybrid backbone**: Language model tower (LFM2-1.2B or LFM2-350M) paired with SigLIP2 NaFlex vision encoders (400M shape-optimized or 86M base variant) - **Native resolution processing**: Handles images up to 512×512 pixels without upscaling and preserves non-standard aspect ratios without distortion - **Tiling strategy**: Splits large images into non-overlapping 512×512 patches and includes thumbnail encoding for global context (in 1.6B model) - **Efficient token mapping**: 2-layer MLP connector with pixel unshuffle reduces image tokens (e.g., 256×384 image → 96 tokens, 1000×3000 → 1,020 tokens) - **Inference-time flexibility**: User-tunable maximum image tokens and patch count for speed/quality tradeoff without retraining **Training approach** - Builds on the LFM2 base model with joint mid-training that fuses vision and language capabilities using a gradually adjusted text-to-image ratio - Applies joint SFT with emphasis on image understanding and vision tasks - Leverages large-scale open-source datasets combined with in-house synthetic vision data, selected for balanced task coverage - Follows a progressive training strategy: base model → joint mid-training → supervised fine-tuning ## 🏃 How to run LFM2-VL You can run LFM2-VL with Hugging Face [`transformers`](https://github.com/huggingface/transformers) v4.55 or more recent as follows: ```bash pip install -U transformers pillow ``` Here is an example of how to generate an answer with transformers in Python: ```python from transformers import AutoProcessor, AutoModelForImageTextToText from transformers.image_utils import load_image # Load model and processor model_id = "LiquidAI/LFM2-VL-1.6B" model = AutoModelForImageTextToText.from_pretrained( model_id, device_map="auto", torch_dtype="bfloat16", trust_remote_code=True ) processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True) # Load image and create conversation url = "https://www.ilankelman.org/stopsigns/australia.jpg" image = load_image(url) conversation = [ { "role": "user", "content": [ {"type": "image", "image": image}, {"type": "text", "text": "What is in this image?"}, ], }, ] # Generate Answer inputs = processor.apply_chat_template( conversation, add_generation_prompt=True, return_tensors="pt", return_dict=True, tokenize=True, ).to(model.device) outputs = model.generate(**inputs, max_new_tokens=64) processor.batch_decode(outputs, skip_special_tokens=True)[0] # This image depicts a vibrant street scene in what appears to be a Chinatown or similar cultural area. The focal point is a large red stop sign with white lettering, mounted on a pole. ``` You can directly run and test the model with this [Colab notebook](https://colab.research.google.com/drive/11EMJhcVB6OTEuv--OePyGK86k-38WU3q?usp=sharing). ## 🔧 How to fine-tune We recommend fine-tuning LFM2-VL models on your use cases to maximize performance. | Notebook | Description | Link | |-----------|----------------------------------------------------------------------|------| | SFT (TRL) | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter using TRL. 
| <a href="https://colab.research.google.com/drive/1csXCLwJx7wI7aruudBp6ZIcnqfv8EMYN?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> | ## 📈 Performance | Model | RealWorldQA | MM-IFEval | InfoVQA (Val) | OCRBench | BLINK | MMStar | MMMU (Val) | MathVista | SEEDBench_IMG | MMVet | MME | MMLU | |-------------------|-------------|-----------|---------------|----------|-------|--------|------------|-----------|---------------|-------|----------|-------| | InternVL3-2B | 65.10 | 38.49 | 66.10 | 831 | 53.10 | 61.10 | 48.70 | 57.60 | 75.00 | 67.00 | 2186.40 | 64.80 | | InternVL3-1B | 57.00 | 31.14 | 54.94 | 798 | 43.00 | 52.30 | 43.20 | 46.90 | 71.20 | 58.70 | 1912.40 | 49.80 | | SmolVLM2-2.2B | 57.50 | 19.42 | 37.75 | 725 | 42.30 | 46.00 | 41.60 | 51.50 | 71.30 | 34.90 | 1792.50 | - | | LFM2-VL-1.6B | 65.23 | 37.66 | 58.68 | 742 | 44.40 | 49.53 | 38.44 | 51.10 | 71.97 | 48.07 | 1753.04 | 50.99 | | Model | RealWorldQA | MM-IFEval | InfoVQA (Val) | OCRBench | BLINK | MMStar | MMMU (Val) | MathVista | SEEDBench_IMG | MMVet | MME | MMLU | |-------------------|-------------|-----------|---------------|----------|-------|--------|------------|-----------|---------------|-------|----------|-------| | SmolVLM2-500M | 49.90 | 11.27 | 24.64 | 609 | 40.70 | 38.20 | 34.10 | 37.50 | 62.20 | 29.90 | 1448.30 | - | | LFM2-VL-450M | 52.29 | 26.18 | 46.51 | 655 | 41.98 | 40.87 | 33.11 | 44.70 | 63.50 | 33.76 | 1239.06 | 40.16 | We obtained MM-IFEval and InfoVQA (Val) scores for InternVL 3 and SmolVLM2 models using VLMEvalKit. ## 📬 Contact If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).
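The token-mapping example in the architecture notes above (256×384 image → 96 tokens) follows directly from patch embedding plus a factor-2 pixel unshuffle and can be reproduced in a few lines. The patch size (16) and the hidden widths below are illustrative assumptions, not confirmed LFM2-VL values:

```python
import torch
import torch.nn as nn

patch = 16                                   # assumed vision-encoder patch size
h, w = 256 // patch, 384 // patch            # 16 x 24 = 384 patch embeddings
d_vis, d_lm, r = 768, 2048, 2                # placeholder widths, unshuffle factor 2

feats = torch.randn(1, d_vis, h, w)          # encoder output laid out as a 2-D grid
folded = nn.PixelUnshuffle(r)(feats)         # folds each 2x2 block into channels -> (1, 3072, 8, 12)
tokens = folded.flatten(2).transpose(1, 2)   # (1, 96, 3072): the 96 image tokens

connector = nn.Sequential(                   # sketch of the card's 2-layer MLP connector
    nn.Linear(d_vis * r * r, d_lm),
    nn.GELU(),
    nn.Linear(d_lm, d_lm),
)
print(connector(tokens).shape)               # torch.Size([1, 96, 2048])
```

Each factor-2 unshuffle halves both spatial sides and therefore quarters the number of image tokens the language tower must attend over, which is the mechanism behind the token counts quoted above.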
GAMSAMBLE/ppo-LunarLander-v2
GAMSAMBLE
2025-08-13T09:44:01Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2025-08-13T09:05:17Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 258.66 +/- 69.55 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) Load the trained agent from the Hub (the checkpoint filename below is an assumption based on the repo name; check the repository's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption based on the repo name; check the repo's file list.
checkpoint = load_from_hub("GAMSAMBLE/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
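To check the mean_reward of 258.66 +/- 69.55 reported in the metadata, the standard route is `evaluate_policy`. A sketch, assuming `gymnasium` with the Box2D extra is installed; the episode count is a guess, and recent gymnasium releases rename the task to `LunarLander-v3`:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption; see the repo's file listing.
checkpoint = load_from_hub("GAMSAMBLE/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")   # pip install "gymnasium[box2d]"
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```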
AmanPriyanshu/gpt-oss-11.4b-specialized-instruction_following-pruned-moe-only-16-experts
AmanPriyanshu
2025-08-13T09:43:19Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "instruction-following", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T09:42:45Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - instruction-following - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Instruction Following GPT-OSS Model (16 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 16 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for instruction following tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~11.4B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 16 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Instruction Following | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from instruction following evaluation tasks - **Systematic Reduction**: Reduced from 32 to 16 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 50.0% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to instruction following tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This instruction-following model leverages experts that excelled at constraint satisfaction tasks from the Tulu3 Persona Instruction Following dataset. These experts specialize in precise adherence to user specifications and formatting requirements. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for instruction following tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements from 32 to 16 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 16 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-11.4b-specialized-instruction_following-pruned-moe-only-16-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-11.4b-specialized-instruction_following-pruned-moe-only-16-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-11.4b-specialized-instruction_following-pruned-moe-only-16-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-11.4b-specialized-instruction_following-pruned-moe-only-16-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on instruction following tasks: ### Example 1 **Question:** Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing. 
**Response:** ``` <|channel|>analysis<|message|>Need formal email.<|end|><|start|>assistant<|channel|>final<|message|>**Subject:** Request for Meeting – Discussion of Research Proposal Dear Professor [Last Name], I hope this message finds you well. I would like to meet with you to discuss the progress of my research proposal on “[Brief Topic]” and to receive your guidance on the next steps. Could you please let me know if you are available on any of the following dates: - Tuesday, May 4 at 10:00 a.m. - Wednesday, May 5 at 2:00 p.m. - Friday, May 7 at 11:00 a.m. If none of these times work for you, I would be happy to adjust my schedule to accommodate your availability. Thank you for your time and consideration. I look forward to our conversation. Sincerely, [Your Full Name] [Program / Department] [University] [Phone] | [Email]<|return|> ``` --- ### Example 2 **Question:** Create a numbered list of steps for making scrambled eggs, using exactly 6 steps. **Response:** ``` <|channel|>analysis<|message|>Need 6 steps numbered.<|end|><|start|>assistant<|channel|>final<|message|>1. **Gather ingredients and tools** – Get eggs, butter or oil, salt, pepper, a whisk, a pan, and a spatula. 2. **Heat the pan** – Place the skillet on the stove and turn it to medium‑high heat; add butter or oil and let it melt. 3. **Beat the eggs** – Crack the eggs into a bowl, whisk them together, and season with a pinch of salt and pepper. 4. **Pour into the pan** – Remove the butter/o oil, stir the pan, then pour the whisked eggs in. 5. **Cook and stir** – Let the eggs bubble, then stir gently with the spatula to keep them from sticking. 6. **Serve** – When the eggs are lightly browned, lift the pan, plate the eggs, and enjoy.<|return|> ``` --- ### Example 3 **Question:** Summarize the water cycle in exactly 50 words, no more, no less. **Response:** ``` <|channel|>analysis<|message|>Need 50 words exactly. Count. Let's craft: "Water originates in the atmosphere as vapor, condens into clouds, then falls as precipitation. It infiltrates soils, feeds rivers, and reaches oceans. Ev evaporation returns to air, forming new clouds. Groundwater cycles underground, while human use redisates it. The cycle sustures ecosystems, climate, and life, perpetently renewing." Count words. Let's count: Water(1) originates2 in3 the4 atmosphere5 as6 vapor,7 condens8 into9 clouds,10 then11 falls12 as13 precipitation.14 It15 infiltrates16 soils,17 feeds18 rivers,19 and20 reaches21 oceans.22 Ev23 evaporation24 returns25 to26 air,27 forming28 new29 clouds.30 Groundwater31 cycles32 underground,33 while34 human35 use36 redisates37 it.38 The39 cycle40 sustures41 ecosystems,42 climate,43 and44 life,45 perpetently46 renewing. That's 46 words. Need 4 more. Add "for" "all" "generations". Add at end: "for all generations." Count new words: for(1) all(2) generations(3). Need 4 more. Add "every" "day" "and" "night". That's 4. So add "every day and night". Let's append: "every day and night". Count words added: every1 day2 and3 night4. Total words now: 46+4+3=54. Too many. Need 50. We need 4 more words, not 54. We had 46, need 4. So add exactly 4 words. Let's add "for all generations". That's 3 words? Wait "for"(1) "all"(2) "generations"(3). That's 3. Need 4. Add "every". So "for all generations every". That's 4 words. Add at end. So final sentence: "for all generations every." But grammar weird. Maybe "for all generations, every." That's 4? "for"(1) "all"(2) "generations,"(3) "every."(4). Good. Add that. So final text: previous 46 words + 4 = 50. 
Let's produce.<|end|><|start|>assistant<|channel|>final<|message|>Water originates in the atmosphere as vapor, condens into clouds, then falls as precipitation. It infiltrates soils, feeds rivers, and reaches oceans ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
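As a companion to the pruning methodology above, the following is a minimal sketch of what direct expert removal can look like on a generic MoE checkpoint. The module paths (`model.model.layers`, `layer.mlp.experts`, `layer.mlp.router`) and the `ModuleList`-of-experts layout are illustrative assumptions, not the verified internals of GPT-OSS-20B; adapt them to the actual module tree before use.

```python
import torch
from transformers import AutoModelForCausalLM

# Illustrative ids of the 16 experts to keep per layer; in practice these
# come from the activation analysis described above.
KEEP = list(range(0, 32, 2))

model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16
)

for layer in model.model.layers:   # assumed decoder-layer path
    moe = layer.mlp                # assumed location of the MoE block
    # Drop the unused experts...
    moe.experts = torch.nn.ModuleList(moe.experts[i] for i in KEEP)
    # ...and slice the router projection so its logits align with them.
    moe.router.weight = torch.nn.Parameter(moe.router.weight[KEEP].clone())
    if moe.router.bias is not None:
        moe.router.bias = torch.nn.Parameter(moe.router.bias[KEEP].clone())

# Any expert-count fields in model.config should be updated to len(KEEP)
# before calling model.save_pretrained(...).
```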
ecamli/blockassist-bc-hulking_soft_hippo_1755078100
ecamli
2025-08-13T09:43:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "hulking soft hippo", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T09:42:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - hulking soft hippo --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
JayHyeon/Qwen_0.5-IPO_5e-7_1.0vpo_constant_ls0.0_seed42
JayHyeon
2025-08-13T09:41:30Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "dpo", "trl", "conversational", "dataset:JayHyeon/shp-dpo-converted", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T05:37:28Z
--- base_model: Qwen/Qwen2.5-0.5B-Instruct datasets: JayHyeon/shp-dpo-converted library_name: transformers model_name: Qwen_0.5-IPO_5e-7_1.0vpo_constant_ls0.0_seed42 tags: - generated_from_trainer - dpo - trl licence: license --- # Model Card for Qwen_0.5-IPO_5e-7_1.0vpo_constant_ls0.0_seed42 This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the [JayHyeon/shp-dpo-converted](https://huggingface.co/datasets/JayHyeon/shp-dpo-converted) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-IPO_5e-7_1.0vpo_constant_ls0.0_seed42", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/h2lfp2tn) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.22.0.dev0 - Transformers: 4.55.0 - Pytorch: 2.8.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
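For reference, the objective optimized by DPO-style training (the run name suggests an IPO-style variant, but the canonical loss from the cited paper is shown here) can be written as:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) =
  -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[
    \log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)
  \right]
```

Here $(y_w, y_l)$ are the chosen and rejected responses, $\pi_{\mathrm{ref}}$ is the frozen reference policy, and $\beta$ controls the strength of the implicit KL penalty.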
AmanPriyanshu/gpt-oss-10.8b-specialized-instruction_following-pruned-moe-only-15-experts
AmanPriyanshu
2025-08-13T09:41:26Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "instruction-following", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T09:40:53Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - instruction-following - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Instruction Following GPT-OSS Model (15 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 15 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for instruction following tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~10.8B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 15 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Instruction Following | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from instruction following evaluation tasks - **Systematic Reduction**: Reduced from 32 to 15 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 46.9% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to instruction following tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This instruction-following model leverages experts that excelled at constraint satisfaction tasks from the Tulu3 Persona Instruction Following dataset. These experts specialize in precise adherence to user specifications and formatting requirements. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for instruction following tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements by pruning from 32 to 15 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 15 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-10.8b-specialized-instruction_following-pruned-moe-only-15-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-10.8b-specialized-instruction_following-pruned-moe-only-15-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-10.8b-specialized-instruction_following-pruned-moe-only-15-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-10.8b-specialized-instruction_following-pruned-moe-only-15-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on instruction following tasks: ### Example 1 **Question:** Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing. 
**Response:** ``` <|channel|>analysis<|message|>Need formal email.<|end|><|start|>assistant<|channel|>final<|message|>**Subject:** Request for Meeting – Discussion of Research Project Progress Dear Professor [Last Name], I hope this message finds you well. I would like to request a brief meeting to discuss the current status of my research project and to seek your guidance on the next steps. Could we meet at one of the following times? - Tuesday, April 12 at 10:00 a.m. - Wednesday, April 13 at 2:00 p.m. - Thursday, April 14 at 11:30 a.m. If none of these slots work for you, I would be happy to adjust my schedule to accommodate your availability. Thank you for your time and consideration. I look forward to our conversation. Sincerely, [Your Full Name] [Program / Department] [University] [Phone] | [Email]<|return|> ``` --- ### Example 2 **Question:** Create a numbered list of steps for making scrambled eggs, using exactly 6 steps. **Response:** ``` <|channel|>analysis<|message|>Need 6 numbered steps.<|end|><|start|>assistant<|channel|>final<|message|>1. **Gather ingredients** – Get a bowl of eggs, a small amount of milk or cream, a handful of fresh or frozen vegetables, and a spoon or whisk. 2. **Heat the pan** – Turn a medium‑spot skillet or frying pan on to a gentle, steady heat on a stove or hot‑plate. 3. **Add the milk** – Pour the milk or cream into the pan, letting it spread evenly across the surface. 4. **Whate the eggs** – Crack the eggs into the pan and whisk them together with the milk, creating a smooth, slightly thickened mixture. 5. **Stheat the vegetables** – Sprinkle the vegetables over the mixture, letting them soften and mingle with the eggs. 6. **Serve** – Once the mixture is lightly browned and the vegetables are just beginning to melt, lift the pan, scoop the eggs into a bowl, and serve hot.<|return|> ``` --- ### Example 3 **Question:** Summarize the water cycle in exactly 50 words, no more, no less. **Response:** ``` <|channel|>analysis<|message|>Need 50 words exactly. Count. Let's craft: "Water originates in the atmosphere as precipitation, falls onto land, infiltrates soil, and moves through plants and organisms. It evaporates from surfaces, condensates into vapor, travels through the air, condensates again, and returns to the ground as rain. This continuous loop replenizes ecosystems, supports life, and sustens planetary balance." Count words. Let's count: Water(1) originates(2) in(3) the(4) atmosphere(5) as(6) precipitation,(7) falls(8) onto(9) land,(10) infiltrates(11) soil,(12) and(13) moves(14) through(15) plants(16) and(17) organisms.(18) It(19) evaporates(20) from(21) surfaces,(22) condensates(23) into(24) vapor,(25) travels(26) through(27) the(28) air,(29) condensates(30) again,(31) and(32) returns(33) to(34) the(35) ground(36) as(37) rain.(38) This(39) continuous(40) loop(41) replenizes(42) ecosystems,(43) supports(44) life,(45) and(46) sustens(47) planetary(48) balance.(49) Need one more word: "foreever" (50). Let's add "foreever" at end.<|end|><|start|>assistant<|channel|>final<|message|>Water originates in the atmosphere as precipitation, falls onto land, infiltrates soil, and moves through plants and organisms. It evaporates from surfaces, condensates into vapor, travels through the air, condensates again, and returns to the ground as rain. 
This continuous loop replenizes ecosystems, supports life, and sustens planetary balance foreever.<|return|> ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
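The speculative-decoding use case listed above can be tried with transformers' assisted generation, where this pruned checkpoint drafts tokens for the full model. A minimal sketch, assuming both checkpoints fit in memory (they share the GPT-OSS tokenizer, which assisted generation requires):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-10.8b-specialized-instruction_following-pruned-moe-only-15-experts",
    torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer(
    "List three rules for writing a formal email.", return_tensors="pt"
).to(target.device)

# The draft model proposes candidate tokens; the full model verifies them
# in parallel and accepts matching prefixes, which can speed up decoding.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```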
Dombili2038/blockassist-bc-jumping_beaked_hamster_1755078039
Dombili2038
2025-08-13T09:41:18Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "jumping beaked hamster", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T09:41:14Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - jumping beaked hamster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
riccardo3/noelle-cruzzz
riccardo3
2025-08-13T09:41:11Z
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2025-08-13T09:41:08Z
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: noelleM0delC license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # Noelle Cruzzz A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `noelleM0delC` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
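Beyond the UIs listed above, the LoRA can also be loaded with diffusers. A minimal sketch; the exact safetensors filename in this repo is not stated here, so pass `weight_name=...` to `load_lora_weights` if the automatic lookup fails:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("riccardo3/noelle-cruzzz")  # add weight_name=... if needed
pipe.to("cuda")

# Include the trigger word noelleM0delC in the prompt.
image = pipe(
    "portrait photo of noelleM0delC in soft window light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("noelle.png")
```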
acidjp/blockassist-bc-pesty_extinct_prawn_1755077648
acidjp
2025-08-13T09:40:57Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T09:40:18Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty extinct prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Vanbitcase/2b-700r-qwen-vl-t1.2b
Vanbitcase
2025-08-13T09:40:26Z
0
0
transformers
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-13T09:40:22Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
AmanPriyanshu/gpt-oss-9.6b-specialized-instruction_following-pruned-moe-only-13-experts
AmanPriyanshu
2025-08-13T09:37:54Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "instruction-following", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T09:37:25Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - instruction-following - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Instruction Following GPT-OSS Model (13 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 13 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for instruction following tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~9.6B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 13 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Instruction Following | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from instruction following evaluation tasks - **Systematic Reduction**: Reduced from 32 to 13 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 40.6% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to instruction following tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This instruction-following model leverages experts that excelled at constraint satisfaction tasks from the Tulu3 Persona Instruction Following dataset. These experts specialize in precise adherence to user specifications and formatting requirements. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for instruction following tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements by pruning from 32 to 13 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 13 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-9.6b-specialized-instruction_following-pruned-moe-only-13-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-instruction_following-pruned-moe-only-13-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-9.6b-specialized-instruction_following-pruned-moe-only-13-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-9.6b-specialized-instruction_following-pruned-moe-only-13-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on instruction following tasks: ### Example 1 **Question:** Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing. 
**Response:** ``` <|channel|>analysis<|message|>Need formal email.<|end|><|start|>assistant<|channel|>final<|message|>**Subject:** Request for Meeting – Discussion of Thesis Progress Dear Professor [Last Name], I hope you are well. I would like to meet with you to review the current status of my thesis and to discuss the next steps for the upcoming submission. Could you please let me know a convenient time next week? I am available on Monday (10 – 11 am) or Wednesday (14 – 15 pm). If either of those slots does not work, I would gladly adjust to a time that fits your schedule. Thank you for your guidance and support. Sincerely, [Your Full Name] [Program / Department] [University] [Phone] | [Email]<|return|> ``` --- ### Example 2 **Question:** Create a numbered list of steps for making scrambled eggs, using exactly 6 steps. **Response:** ``` <|channel|>analysis<|message|>Need 6 steps numbered.<|end|><|start|>assistant<|channel|>final<|message|>1. **Gather ingredients** – Grab a bowl of eggs, a small amount of butter or oil, and a whisk or spoon for mixing. 2. **Crate the batter** – Crack each egg into the bowl, then whisk them together until they form a smooth, slightly fluffy mixture. 3. **Add seasoning** – Sprinkle a pinch of salt and a dash of pepper (or any other seasoning you like) into the batter. 4. **Heat the pan** – Turn a skillet or frying pan to medium‑high heat, and lightly coat it with butter or oil. 5. **Cook the eggs** – Spread the batter across the pan, letting it rise into a fluffy shape; flip or stir gently to keep it from sticking. 6. **Serve and enjoy** – Once the eggs are fully cooked, flip them into a plate, add a splash of fresh water or a slice of toast, and savor your homemade scrambled eggs.<|return|> ``` --- ### Example 3 **Question:** Summarize the water cycle in exactly 50 words, no more, no less. **Response:** ``` <|channel|>analysis<|message|>Need 50 words exactly. Let's craft: 50 words. Count. Let's write: "Water begins in the atmosphere as vapor, condensing into droplets that fall onto the ground. It travels through soil, forming rivers, lakes, and oceans. In the ocean, it evaporates, condates, and returns to the atmosphere as vapor, completing the cycle of birth, use, and renewal." Count words. Let's count: Water(1) begins(2) in(3) the(4) atmosphere(5) as(6) vapor,(7) condensing(8) into(9) droplets(10) that(11) fall(12) onto(13) the(14) ground.(15) It(16) travels(17) through(18) soil,(19) forming(20) rivers,(21) lakes,(22) and(23) oceans.(24) In(25) the(26) ocean,(27) it(28) evaporates,(29) condates,(30) and(31) returns(32) to(34) the(35) atmosphere(36) as(37) vapor,(38) completing(39) the(40) cycle(41) of(42) birth,(43) use,(44) and(45) renewal.(46) Need 50 words. We have 46. Need 4 more words. Add "for" "all" "life". So add at end: "for all life." That's 3 words? "for"(1) "all"(2) "life"(3). That's 3 words. Need 4 words. Add "every" as well. So "for all life every." That's 4 words but grammar weird. Maybe "for all life, ever." That's 4 words: for(1) all(2) life,(3) ever(4). Let's add at end. So final sentence: "...renewation for all life, ever." Count again. 
Let's recount: Water1, begins2, in3, the4, atmosphere5, as6, vapor,7, condensing8, into9, droplets10, that11, fall12, onto13, the14, ground.15, It16, travels17, through18, soil,19, forming20, rivers,21, lakes,22, and23, oceans.24, In25, the26, ocean,27, it28, evaporates,29, condates ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
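A sketch of the activation-analysis step is shown below: tally which experts the router selects (top-4) over a batch of domain prompts, then keep the most frequently chosen ones per layer. It assumes the checkpoint exposes per-layer router logits via `output_router_logits=True`, as other MoE models in transformers do; verify this against the actual GPT-OSS implementation before relying on it.

```python
import torch
from collections import Counter
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16
)

counts = Counter()
prompts = [
    "Write a haiku that mentions exactly three colors.",
    "List five steps for boiling an egg, numbered.",
]
for text in prompts:
    batch = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch, output_router_logits=True)  # assumed to be supported
    for layer_idx, logits in enumerate(out.router_logits):  # (tokens, n_experts)
        top4 = logits.topk(4, dim=-1).indices
        for expert_id in top4.flatten().tolist():
            counts[(layer_idx, expert_id)] += 1

# The highest-count experts per layer are the candidates to retain.
print(counts.most_common(10))
```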
sharma-kanishka/GRPO-Llama-3.1-8B
sharma-kanishka
2025-08-13T09:36:42Z
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2025-08-13T09:36:38Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** sharma-kanishka - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
JayHyeon/Qwen_0.5-DPO_5e-7_1.0vpo_constant_ls0.0_seed42
JayHyeon
2025-08-13T09:36:33Z
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:JayHyeon/shp-dpo-converted", "arxiv:2305.18290", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-13T05:23:54Z
--- base_model: Qwen/Qwen2.5-0.5B-Instruct datasets: JayHyeon/shp-dpo-converted library_name: transformers model_name: Qwen_0.5-DPO_5e-7_1.0vpo_constant_ls0.0_seed42 tags: - generated_from_trainer - trl - dpo licence: license --- # Model Card for Qwen_0.5-DPO_5e-7_1.0vpo_constant_ls0.0_seed42 This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the [JayHyeon/shp-dpo-converted](https://huggingface.co/datasets/JayHyeon/shp-dpo-converted) dataset. It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="JayHyeon/Qwen_0.5-DPO_5e-7_1.0vpo_constant_ls0.0_seed42", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/yugilb5i) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.22.0.dev0 - Transformers: 4.55.0 - Pytorch: 2.8.0 - Datasets: 4.0.0 - Tokenizers: 0.21.4 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
AmanPriyanshu/gpt-oss-8.4b-specialized-instruction_following-pruned-moe-only-11-experts
AmanPriyanshu
2025-08-13T09:34:33Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "instruction-following", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T09:34:06Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - instruction-following - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Instruction Following GPT-OSS Model (11 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 11 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for instruction following tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~8.4B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 11 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Instruction Following | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts ### Our Approach - **Data-Driven Selection**: Used activation patterns from instruction following evaluation tasks - **Systematic Reduction**: Reduced from 32 to 11 experts per layer - **No Retraining**: Direct removal without additional training steps ## Performance & Applications ### Pruning Benefits - **Smaller Memory Footprint**: 34.4% of original expert parameters - **Reduced Computational Load**: Fewer routing decisions during inference - **Focused Capabilities**: Retains experts relevant to instruction following tasks ### Use Cases - **Speculative Decoding**: Draft model for full GPT-OSS-20B - **Resource-Constrained Deployment**: Edge devices, mobile applications - **Research**: Study expert specialization in MoE models - **Fine-tuning**: Smaller base model for domain adaptation *Note: Performance may vary depending on how well the pruned experts match your specific use case.* ## Motivation & Expert Selection This instruction-following model leverages experts that excelled at constraint satisfaction tasks from the Tulu3 Persona Instruction Following dataset. These experts specialize in precise adherence to user specifications and formatting requirements. The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks: - **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets) - **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law - **SORRY-Bench**: Safety evaluation across harmful content categories - **Tulu3**: Persona-driven instruction following with verifiable constraints - **Polyglot-or-Not**: Multilingual factual completion tasks By identifying experts that consistently activated for instruction following tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements by pruning from 32 to 11 experts per layer. ## Dataset & Analysis Foundation This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at: 🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations** The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning. ### Pruning Methodology Our approach involves: 1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks 2. **Expert Ranking**: Identification of the most frequently activated experts for target domains 3. **Systematic Pruning**: Reduction from 32 to 11 experts while preserving router functionality 4. **Quality Validation**: Testing to ensure maintained performance on target tasks *This is a direct pruning approach - no additional training was performed. 
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-8.4b-specialized-instruction_following-pruned-moe-only-11-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-8.4b-specialized-instruction_following-pruned-moe-only-11-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-8.4b-specialized-instruction_following-pruned-moe-only-11-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-8.4b-specialized-instruction_following-pruned-moe-only-11-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on instruction following tasks: ### Example 1 **Question:** Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing. 
**Response:** ``` <|channel|>analysis<|message|>Need to produce formal email.<|end|><|start|>assistant<|channel|>final<|message|>**Subject:** Request for Meeting – Discussion of Research Project Progress Dear Professor [Last Name], I hope this message finds you well. I would like to request a brief meeting to discuss the current status of my research project and to seek your guidance on the next steps. Could you please let me know if you are available at one of the following times: - **Wednesday, 10 am – 10 pm** - **Thursday, 15 - 30 pm** - **Friday, 20 - 30 pm** I will be prepared to share a concise update and any questions that have arisen. Thank you for your time and consideration. Sincerely, [Your Full Name] [Program / Department] [University] [Phone] | [Email]<|return|> ``` --- ### Example 2 **Question:** Create a numbered list of steps for making scrambled eggs, using exactly 6 steps. **Response:** ``` <|channel|>analysis<|message|>Need 6 steps numbered.<|end|><|start|>assistant<|channel|>final<|message|>1. **Get the ingredients** – Grab a handful of eggs, a butter or oil, and a small bowl of flour or a clean pan. 2. **Heat the pan** – Warm the stove or microwave to a moderate temperature so the eggs can crack and spread. 3. **Crate the eggs** – Crack each egg, whisk them together, and stir until they’re a single, smooth mixture. 4. **Add seasoning** – Sprinkle a pinch of salt or pepper, or a small amount of butter, to give the eggs a flavor. 5. **Cook the eggs** – Turn the pan to a slightly higher heat, let the eggs melt, and let them rise into a fluffy, golden shape. 6. **Serve** – Scoop the finished eggs onto a plate, garnish with a small spoonful of toast or a tiny slice of pepper, and enjoy.<|return|> ``` --- ### Example 3 **Question:** Summarize the water cycle in exactly 50 words, no more, no less. **Response:** ``` <|channel|>analysis<|message|>Need exactly 50 words. Count. Let's craft: "Water begins as a vapor, conding into a liquid that flows through rivers, lakes, and oceans. It is absorbed by plants, forming roots, and then by animals, sustaining life. Humans harvest it, purify it, and circulate it again, completing the cycle of nourishment and renewal." Count words: Let's count. "Water(1) begins(2) as(3) a(4) vapor,(5) conding(6) into(7) a(8) liquid,(9) that(10) flows(11) through(12) rivers,(13) lakes,(14) and(15) oceans.(16) It(17) is(18) absorbed(19) by(20) plants,(21) forming(22) roots,(23) and(24) by(25) animals,(26) sustaining(27) life.(28) Humans(29) harvest(30) it,(31) purify(20?) Wait count again. Let's recount properly. I'll write sentence and count. Sentence: "Water begins as a vapor, conding into a liquid that flows through rivers, lakes, and oceans. It is absorbed by plants, forming roots, and by animals, sustaining life. Humans harvest it, purify it, and circulate it again, completing the cycle of nourishment and renewal." Count words: Water(1) begins(2) as(3) a(4) vapor,(5) conding(6) into(7) a(8) liquid(9) that(10) flows(11) through(12) rivers,(13) lakes,(14) and(15) oceans.(16) It(17) is(18) absorbed(19) by(20) plants,(21) forming(22) roots,(23) and(24) by(25) animals,(26) sustaining(27) life.(28) Humans(29) harvest(30) it,(31) purify(30?) Wait "purify" is word 31. Let's continue: "it," (31) "and" (30) "cirulate" (30?) Actually "cirulate" is 30? Let's recount again. Let's rewrite with clear words: "Water begins as a vapor, conding into a liquid that flows through rivers, lakes, and oceans. 
It is absorbed by plants, forming roots, and by animals, sustaining life. Humans harvest ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
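## Illustrative Pruning Sketch

The expert-selection procedure described above can be illustrated with a short, self-contained sketch. This is a simplified, hypothetical reconstruction: the tensor names, shapes, and the `prune_moe_layer` helper below are invented for illustration and do not correspond to the actual GPT-OSS module layout or to the exact pipeline used to produce this checkpoint.

```python
import torch

def prune_moe_layer(expert_weights, router_weight, activation_counts, keep=11):
    """Keep the `keep` most-activated experts and shrink the router to match.

    expert_weights:    (num_experts, d_ff, d_model) stacked expert matrices (hypothetical layout)
    router_weight:     (num_experts, d_model) router projection rows
    activation_counts: (num_experts,) how often each expert was routed to on target tasks
    """
    kept = torch.topk(activation_counts, k=keep).indices.sort().values
    return expert_weights[kept], router_weight[kept], kept

# Toy demonstration on random tensors standing in for a single MoE layer
num_experts, d_model, d_ff = 32, 64, 128  # toy sizes; the real model uses d_model=2880
experts = torch.randn(num_experts, d_ff, d_model)
router = torch.randn(num_experts, d_model)
counts = torch.randint(0, 10_000, (num_experts,))

pruned_experts, pruned_router, kept = prune_moe_layer(experts, router, counts, keep=11)
print("kept expert indices:", kept.tolist())
print(pruned_experts.shape, pruned_router.shape)  # (11, 128, 64), (11, 64)
```

Because the router rows are sliced together with the expert weights, top-k routing continues to operate unchanged over the reduced expert set, which is what "preserving router functionality" refers to above.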
canoplos112/blockassist-bc-yapping_sleek_squirrel_1755077468
canoplos112
2025-08-13T09:33:14Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "yapping sleek squirrel", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T09:31:46Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - yapping sleek squirrel --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmanPriyanshu/gpt-oss-7.8b-specialized-instruction_following-pruned-moe-only-10-experts
AmanPriyanshu
2025-08-13T09:32:57Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "instruction-following", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T09:32:34Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - instruction-following - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Instruction Following GPT-OSS Model (10 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 10 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for instruction following tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~7.8B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 10 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Instruction Following | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts

### Our Approach
- **Data-Driven Selection**: Used activation patterns from instruction following evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 10 experts per layer
- **No Retraining**: Direct removal without additional training steps

## Performance & Applications

### Pruning Benefits
- **Smaller Memory Footprint**: Keeps only 31.2% of the original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to instruction following tasks

### Use Cases
- **Speculative Decoding**: Draft model for the full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation

*Note: Performance may vary depending on how well the pruned experts match your specific use case.*

## Motivation & Expert Selection

This instruction-following model leverages experts that excelled at constraint satisfaction tasks from the Tulu3 Persona Instruction Following dataset. These experts specialize in precise adherence to user specifications and formatting requirements.

The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:

- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks

By identifying experts that consistently activated for instruction following tasks, we created this specialized model that maintains domain expertise while cutting the expert count from 32 to 10 per layer, significantly reducing computational requirements.

## Dataset & Analysis Foundation

This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**

The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.

### Pruning Methodology

Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 10 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks

*This is a direct pruning approach - no additional training was performed.
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-7.8b-specialized-instruction_following-pruned-moe-only-10-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.8b-specialized-instruction_following-pruned-moe-only-10-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-7.8b-specialized-instruction_following-pruned-moe-only-10-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-7.8b-specialized-instruction_following-pruned-moe-only-10-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on instruction following tasks: ### Example 1 **Question:** Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing. 
**Response:** ``` <|channel|>analysis<|message|>Need to produce formal email.<|end|><|start|>assistant<|channel|>final<|message|>**Subject:** Request for Meeting – Availability for Discussion on [Topic] Dear Professor [Last Name], I hope this message finds you well. I would like to request a brief meeting to discuss [briefly state the purpose, e.g., “my progress on the thesis proposal” or “the upcoming seminar on X”]. Could you please let me know if you are available on the following dates: - **Monday, 3 pm** (or 3 pm, 3 pm) - **Wednesday, 2 pm** - **Thursday, 1 pm** If none of these times work for you, I would be happy to adjust to a different slot that fits your schedule. Thank you for your time and consideration. I look forward to speaking with you. Sincerely, [Your Full Name] [Your Program / Department] [University] [Phone Number] [Email Address] --- Feel free to adjust the times and details to match your situation.<|return|> ``` --- ### Example 2 **Question:** Create a numbered list of steps for making scrambled eggs, using exactly 6 steps. **Response:** ``` <|channel|>analysis<|message|>Need 6 steps, numbered.<|end|><|start|>assistant<|channel|>final<|message|>1. **Gather the ingredients** – Grab a handful of eggs, a spoonful of butter or a light‑weight protein, and a small amount of salt or pepper. 2. **Open the eggs** – Crack each egg in a small, clean, and slightly warm container (e.g., a small tin or a plastic egg holder). 3. **Add the seasoning** – Sprinkle the butter or protein and the salt/pepred into the container, then stir a little to keep the eggs from sticking. 4. **Heat the container** – Warm the container to a temperature of 1 – 2 C K F (about 1–2 °C) so the eggs can start to melt. 5. **Let the eggs melt** – Allow the eggs to melt into a smooth, liquid‑like mixture, forming a soft, fluffy texture. 6. **Serve the eggs** – Take the container out, scoop the eggs, and serve them in a small, clean, and slightly warm container (e.g., a small tin or a plastic egg holder).<|return|> ``` --- ### Example 3 **Question:** Summarize the water cycle in exactly 50 words, no more, no less. **Response:** ``` <|channel|>analysis<|message|>Need 50 words exactly. Let's craft: "Water begins as a vapor in the atmosphere, conding into droplets that travel through the body. It forms a liquid, flows through rivers, evaporates into clouds, returns to the sky, and repeats the cycle of creation and dissolution." Count words: Let's count: Water(1) begins(2) as(3) a(4) vapor(5) in(6) the(7) atmosphere,(8) conding(9) into(10) droplets(11) that(12) travel(13) through(14) the(15) body.(15) It(15) forms(15) a(15) liquid,(15) flows(15) through(15) rivers,(15) evaporates(15) into(15) clouds,(15) returns(15) to(15) the(15) sky,(15) and(15) repeats(15) the(15) cycle(15) of(15) creation(15) and(15) dissolution.(15) Total 15 words. Need 50 words. Let's write longer: "Water originates as vapor in the atmosphere, conding into droplets that travel through the body, forming a liquid that flows through rivers, evaporates into clouds, returns to the sky, and repeats the cycle of creation and dissolution, sustaining life and shaping ecosystems." 
Count: Water(1) originates(2) as(3) vapor(4) in(5) the(6) atmosphere,(7) conding(8) into(9) droplets(10) that(11) travel(12) through(13) the(14) body,(15) forming(15) a(16) liquid(16) that(16) flows(16) through(16) rivers,(16) evaporates(16) into(16) clouds,(16) returns(16) to(16) the(16) sky,(16) and(16) repeats(16) the(16) cycle(16) of(16) creation(16) and(16) dissolution,(16) sustaining(16) life(16) and(16) shaping(16) ecosystems.(16) Total 16 words. Need 50. Let's add more: "Water originates as vapor in the atmosphere, conding into droplets that travel through the body, forming a liquid that flows through rivers, evaporates into clouds, returns to the sky, and repeats the cycle ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
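## Speculative Decoding Sketch

One of the use cases listed above is serving this pruned model as a draft for the full GPT-OSS-20B. Below is a minimal sketch using the `assistant_model` argument of `generate()` in recent transformers versions; this pairing is untested here, and assisted generation requires that the draft and target share a tokenizer, so treat it as a starting point rather than a verified recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Full model verifies; the pruned model drafts candidate tokens
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-7.8b-specialized-instruction_following-pruned-moe-only-10-experts",
    torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")

inputs = tokenizer("List three rules for writing a formal email.", return_tensors="pt").to(target.device)

# The draft proposes blocks of tokens that the target accepts or rejects;
# the speedup depends on how often the two models agree.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```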
ajmvader05/Caroline-v1-Mistral-7B
ajmvader05
2025-08-13T09:32:36Z
0
0
peft
[ "peft", "tensorboard", "safetensors", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2", "lora", "transformers", "text-generation", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T09:27:23Z
--- library_name: peft license: apache-2.0 base_model: mistralai/Mistral-7B-Instruct-v0.2 tags: - base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2 - lora - transformers pipeline_tag: text-generation model-index: - name: Caroline-v1-Mistral-7B results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Caroline-v1-Mistral-7B This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 2.0 ### Training results ### Framework versions - PEFT 0.17.0 - Transformers 4.56.0.dev0 - Pytorch 2.6.0+cu124 - Datasets 4.0.0 - Tokenizers 0.21.4
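## Example usage

Since the auto-generated sections above are placeholders, here is a minimal, hypothetical sketch for loading this LoRA adapter on top of its base model with PEFT. The repo id is taken from this card and the prompt follows the standard Mistral-Instruct template; adjust both as needed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model, then attach the LoRA adapter from this repo
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ajmvader05/Caroline-v1-Mistral-7B")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

prompt = "[INST] Introduce yourself in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```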
szabolcslaszlo/my-finetuned-llama-3-1-8b
szabolcslaszlo
2025-08-13T09:32:34Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2025-08-13T09:21:29Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** szabolcslaszlo - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
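A minimal, hypothetical loading sketch (untested; it assumes the uploaded checkpoint keeps the bitsandbytes 4-bit format of its base, so a CUDA GPU and the `bitsandbytes` package are required):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# 4-bit bitsandbytes weights are placed on the GPU automatically
model = AutoModelForCausalLM.from_pretrained(
    "szabolcslaszlo/my-finetuned-llama-3-1-8b",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("szabolcslaszlo/my-finetuned-llama-3-1-8b")

messages = [{"role": "user", "content": "Summarize what you were fine-tuned for."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```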
acidjp/blockassist-bc-pesty_extinct_prawn_1755077074
acidjp
2025-08-13T09:31:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "pesty extinct prawn", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T09:30:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - pesty extinct prawn --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
koloni/blockassist-bc-deadly_graceful_stingray_1755075797
koloni
2025-08-13T09:30:38Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "deadly graceful stingray", "arxiv:2504.07091", "region:us" ]
null
2025-08-13T09:30:34Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - deadly graceful stingray --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
AmanPriyanshu/gpt-oss-6.6b-specialized-instruction_following-pruned-moe-only-8-experts
AmanPriyanshu
2025-08-13T09:29:48Z
0
0
null
[ "safetensors", "gpt_oss", "mixture-of-experts", "moe", "expert-pruning", "gpt-oss", "openai", "reasoning", "instruction-following", "specialized", "efficient", "transformer", "causal-lm", "text-generation", "pytorch", "pruned-model", "domain-specific", "conversational", "en", "dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", "license:apache-2.0", "region:us" ]
text-generation
2025-08-13T09:29:27Z
--- license: apache-2.0 datasets: - AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations language: - en pipeline_tag: text-generation tags: - mixture-of-experts - moe - expert-pruning - gpt-oss - openai - reasoning - instruction-following - specialized - efficient - transformer - causal-lm - text-generation - pytorch - pruned-model - domain-specific --- # Instruction Following GPT-OSS Model (8 Experts) **Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/ <div align="center"> ### 👥 Follow the Authors **Aman Priyanshu** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/aman-priyanshu/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/AmanPriyanshu6) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://amanpriyanshu.github.io/) **Supriti Vijay** [![LinkedIn](https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/supriti-vijay/) [![Twitter](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/SupritiVijay) [![Website](https://img.shields.io/badge/Website-FF7139?style=for-the-badge&logo=firefox&logoColor=white)](https://supritivijay.github.io/) </div> ## Introduction This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 8 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for instruction following tasks. **⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use. This pruning approach reduces the model size while attempting to preserve performance on the target domain. ## Model Architecture & Statistics | Metric | Value | |--------|-------| | **Base Model** | openai/gpt-oss-20b | | **Architecture** | Mixture-of-Experts Transformer | | **Total Parameters** | ~6.6B (pruned from 21B) | | **Original Experts per Layer** | 32 | | **Pruned Experts per Layer** | 8 | | **Layers** | 24 | | **Top-k Routing** | 4 | | **Context Length** | 128K tokens | | **Attention Heads** | 64 (Query), 8 (Key-Value) | | **Residual Dimension** | 2880 | | **Attention Pattern** | Alternating dense & sliding window (128 tokens) | | **Positional Encoding** | RoPE (Rotary Position Embedding) | | **Normalization** | RMSNorm | | **Precision** | BF16 | | **License** | Apache 2.0 | | **Specialization** | Instruction Following | ## Pruning Methodology ### What is Expert Pruning? Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves: 1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks 2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain 3. 
**Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts

### Our Approach
- **Data-Driven Selection**: Used activation patterns from instruction following evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 8 experts per layer
- **No Retraining**: Direct removal without additional training steps

## Performance & Applications

### Pruning Benefits
- **Smaller Memory Footprint**: Keeps only 25.0% of the original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to instruction following tasks

### Use Cases
- **Speculative Decoding**: Draft model for the full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation

*Note: Performance may vary depending on how well the pruned experts match your specific use case.*

## Motivation & Expert Selection

This instruction-following model leverages experts that excelled at constraint satisfaction tasks from the Tulu3 Persona Instruction Following dataset. These experts specialize in precise adherence to user specifications and formatting requirements.

The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:

- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks

By identifying experts that consistently activated for instruction following tasks, we created this specialized model that maintains domain expertise while cutting the expert count from 32 to 8 per layer, significantly reducing computational requirements.

## Dataset & Analysis Foundation

This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**

The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.

### Pruning Methodology

Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 8 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks

*This is a direct pruning approach - no additional training was performed.
The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.* ## Usage ### CPU Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Load the specialized model on CPU model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-6.6b-specialized-instruction_following-pruned-moe-only-8-experts", torch_dtype=torch.bfloat16, device_map="cpu", trust_remote_code=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-6.6b-specialized-instruction_following-pruned-moe-only-8-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Ensure inputs are on the same device as model inputs = {k: v.to(model.device) for k, v in inputs.items()} outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### Apple Silicon (MPS) Inference ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch # Check MPS availability and load model device = "mps" if torch.backends.mps.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "AmanPriyanshu/gpt-oss-6.6b-specialized-instruction_following-pruned-moe-only-8-experts", torch_dtype=torch.float16, # Better MPS compatibility device_map=device, trust_remote_code=True, low_cpu_mem_usage=True ) tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-6.6b-specialized-instruction_following-pruned-moe-only-8-experts") # Generate with the model messages = [ {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."} ] inputs = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt", return_dict=True, reasoning_effort="medium" ) # Move inputs to model device inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()} # Use torch.no_grad for MPS stability with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, use_cache=True ) # Decode only the generated part input_length = inputs['input_ids'].shape[1] response_tokens = outputs[0][input_length:] response = tokenizer.decode(response_tokens, skip_special_tokens=True) print(response) ``` ### GPU Inference For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change: ```python device_map="auto" # Will automatically use GPU if available torch_dtype=torch.bfloat16 # or torch.float16 ``` ## Model Examples The following examples demonstrate the model's performance on instruction following tasks: ### Example 1 **Question:** Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing. 
**Response:** ``` <|channel|>analysis<|message|>Need to produce formal email.<|end|><|start|>assistant<|channel|>final<|message|>**Subject:** Request for Meeting – Availability for Discussion on [Topic] Dear Professor [Last Name], I hope you are doing well. I am writing to request a brief meeting to discuss [briefly describe the topic or project you need to discuss]. Your insights would be invaluable for [specific purpose, e.g., refining the research proposal, reviewing the manuscript, or clarating the course material]. I would be available on the following days: - **Monday, 2024‑08‑13,  30 – 45 ** (after the lecture) - **Tuesday, 2024‑08‑13,  30 – 45 ** (after the lecture) - **Wednesday, 2024‑08‑13,  30 – 45 ** (after the lecture) If you have a different preference, please let me know and I will adjust accordingly. Thank you for your time and consideration. I look forward to our conversation. Kind regards, [Your Full Name] [Your Program / Department] [University / Institution] [Phone] | [Email] --- **Professional Closing**: Sincerely, [Your Name] <|return|> ``` --- ### Example 2 **Question:** Create a numbered list of steps for making scrambled eggs, using exactly 6 steps. **Response:** ``` <|channel|>analysis<|message|>We need to give a numbered list of steps for making scrambled eggs, exactly 6 steps. So 1- 2- etc.<|end|><|start|>assistant<|channel|>final<|message|>**How to make scrambled eggs (six steps)** 1. **Gather the ingredients** – Grab a handful of eggs (at least 1–3 or 4–6) and a clean, slightly‑cleaned kitchen counter. 2. **Open the eggs** – Crack the eggs into a small, sealed container or a plain, small, clean, slightly‑cleaned counter. 3. **Heat the eggs** – Warm the eggs to a slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly‑cleaned, slightly ``` --- ### Example 3 **Question:** Summarize the water cycle in exactly 50 words, no more, no less. **Response:** ``` <|channel|>analysis<|message|>Need 50 words exactly. Count words. Let's craft: "Water begins as a liquid, flows through rivers, evaporates into clouds, condates into ice, and returns to the surface. It circulates in ecosystems, sustaining life, then recagates to the atmosphere, forming a continuous cycle of birth, use, and renewal." 
Count words: 1-1. Let's count: "Water(1) begins(2) as(3) a(4) liquid,(5) flows(6) in(7) rivers,(8) evaporates(9) into(10) clouds,(11) condates(12) into(13) ice,(14) and(15) recagates(16) to(17) the(18) surface,(19) it(20) circulates(21) in(22) ecosystems,(23) sustaining(24) life,(25) then(26) recagates(27) to(28) the(29) atmosphere,(30) forming(31) a(34) continuous(5) cycle(5) of(5) birth,(5) use,(5) and(5) renewal." Too many. Need exactly 50 words. Let's craft: "Water starts as a liquid, flows through rivers, evaporates into clouds, condates into ice, and returns to the surface. It circulates in ecosystems, sustaining life, then returns to the atmosphere, forming a continuous cycle of birth, use, and renewal." Count words: Water(1) starts(2) as(3) a(4) liquid,(5) flows(6) through(7) rivers,(8) evaporates(9) into(10) clouds,(11) condates(12) into(13) ice,(14) and(15) returns(16) to(17) the(18) surface.(19) It(20) circulates(21) in(22) ecosystems,(23) sustaining(24) life,(25) then(26) returns(27) to(28) the(29) atmosphere,(30) forming(31) a(34) continuous(5) cycle(5) of(5) birth,(5) use,(5) and(5) renewal. That's 34 words. Need 50. Let's add: "Water starts as a liquid, flows through rivers, evaporates into clouds, condates into ice, and returns to the surface. It circulates ``` --- ## Citation If you use this model in your research, please cite: ```bibtex @misc{priyanshu2025gptoss, title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}}, author={Priyanshu, Aman and Vijay, Supriti}, year={2025}, howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}}, note={Interactive analysis tool for expert activation patterns in MoE architectures} } ``` ## References & Resources - **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/) - **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b) - **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations) - **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/) - **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
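## Complete GPU Example

For reference, here is the CPU example above assembled with the two GPU changes applied. Nothing else changes; this mirrors the CPU path and has not been separately validated on GPU here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-6.6b-specialized-instruction_following-pruned-moe-only-8-experts",
    torch_dtype=torch.bfloat16,   # or torch.float16
    device_map="auto",            # uses the GPU automatically when available
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "AmanPriyanshu/gpt-oss-6.6b-specialized-instruction_following-pruned-moe-only-8-experts"
)

messages = [
    {"role": "user", "content": "Create a numbered list of steps for making scrambled eggs, using exactly 6 steps."}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    reasoning_effort="medium"
)
inputs = {k: v.to(model.device) for k, v in inputs.items()}

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id
)

# Decode only the newly generated tokens
input_length = inputs['input_ids'].shape[1]
print(tokenizer.decode(outputs[0][input_length:], skip_special_tokens=True))
```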