# ⚠️ SECURITY RESEARCH - MALICIOUS MODEL POC

## 🚨 WARNING: DO NOT USE IN PRODUCTION

This is a proof-of-concept malicious model created for responsible security disclosure.

- **Purpose:** Demonstrate an arbitrary code execution vulnerability in Hugging Face Transformers
- **Program:** Huntr Bug Bounty (MFV - Model File Vulnerabilities)
- **Severity:** CRITICAL (CVSS 9.6)
- **Status:** Responsibly disclosed
## Vulnerability Details

### Summary

This model contains a weaponized `.metadata` pickle file that executes arbitrary code when
loaded by the Transformers library's OLMo3 conversion script, with no user warning and no
`TRUST_REMOTE_CODE` validation.

### Affected Code

**File:** `src/transformers/models/olmo3/convert_olmo3_weights_to_hf.py`
**Lines:** 259-260

```python
with (Path(model_path) / ".metadata").open("rb") as metadata_file:
    metadata = pickle.load(metadata_file)  # ❌ UNSAFE - no validation!
keys = [key for key in metadata.state_dict_metadata.keys() if key.startswith("model.")]
```
## Proof of Exploitation

When this model is converted using the official conversion script:

```bash
python src/transformers/models/olmo3/convert_olmo3_weights_to_hf.py \
    --model-path ./malicious_olmo3_poc \
    --output-dir ./converted
```

What happens:

1. ✅ The conversion script loads `.metadata` using `pickle.load()`
2. ✅ The pickle exploit triggers via Python's `__reduce__` magic method
3. ✅ Arbitrary code executes (creates `/tmp/huntr_poc_exploited.txt`)
4. ❌ NO WARNING is shown to the user
5. ❌ NO `TRUST_REMOTE_CODE` check is required

**Result:** Full arbitrary code execution with the privileges of the calling process.
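The `__reduce__` mechanism behind this can be illustrated with a minimal, harmless sketch. The class name and printed message here are illustrative stand-ins, not the actual PoC payload:

```python
import pickle

class MetadataPayload:
    """Illustrative stand-in for the weaponized .metadata object."""
    def __reduce__(self):
        # pickle.load() invokes the returned callable with the given args
        # at deserialization time, before the caller ever sees an object.
        # A real payload would return something like (os.system, ("...",));
        # here we return a benign print call instead.
        return (print, ("code executed during pickle.load()",))

blob = pickle.dumps(MetadataPayload())
result = pickle.loads(blob)  # the print fires here; no attribute access needed
```

Note that the victim never has to touch the deserialized object: the callable runs as a side effect of `pickle.loads()` itself, which is why simply opening the `.metadata` file is enough to trigger the exploit.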
## Security Impact

**CVSS 3.1 Score:** 9.6 (Critical)
**Vector:** `AV:N/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H`

- Attack Vector: Network (AV:N)
- Attack Complexity: Low (AC:L)
- Privileges Required: None (PR:N)
- User Interaction: Required (UI:R)
- Scope: Changed (S:C)
- Confidentiality: High (C:H)
- Integrity: High (I:H)
- Availability: High (A:H)
**Impact:**

- 🔴 Arbitrary code execution at model load time
- 🔴 Full system compromise
- 🔴 Data exfiltration (SSH keys, credentials, API tokens)
- 🔴 Persistent backdoor installation
- 🔴 Supply chain attack vector
## ProtectAI Scanner Bypass

This vulnerability bypasses Hugging Face's ProtectAI security scanner because:

- ❌ The scanner focuses on `.pkl` weight files, not `.metadata` files
- ❌ Hidden files (names starting with `.`) are often ignored by scanners
- ❌ The file is loaded by a conversion script, not the main model loading path
- ❌ No file extension indicates the pickle format
- ❌ No `TRUST_REMOTE_CODE` validation exists in this code path

**Result:** A malicious model can be uploaded to the Hugging Face Hub and bypass automated security checks.
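As a sketch of how a scanner could close this gap, a static pass over a pickle stream's opcodes can flag any payload capable of calling code, without ever deserializing it. This is one possible mitigation under stated assumptions, not ProtectAI's actual implementation; `pickle_can_call_code` is a hypothetical helper name:

```python
import pickle
import pickletools

def pickle_can_call_code(data: bytes) -> bool:
    """Heuristic: does this pickle stream contain opcodes that resolve
    globals or invoke callables? (A flagging aid, not a safety guarantee.)"""
    suspicious = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ",
                  "NEWOBJ", "NEWOBJ_EX"}
    try:
        # pickletools.genops yields (opcode, arg, position) without executing
        # anything, so it is safe to run on untrusted input.
        return any(op.name in suspicious for op, _arg, _pos in pickletools.genops(data))
    except Exception:
        return True  # treat unparseable streams as suspicious

# A plain-data pickle (dicts, strings, ints) uses none of these opcodes.
clean = pickle_can_call_code(pickle.dumps({"state_dict_metadata": {}}))  # False
```

Because this inspection runs on raw bytes regardless of filename, it would catch a hidden `.metadata` file just as readily as a `.pkl` weight file.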
## Responsible Disclosure

### Disclosure Timeline

- **Discovery Date:** October 6, 2025
- **Disclosure Platform:** Huntr (https://huntr.com)
- **Program:** Model File Vulnerabilities (MFV)
- **Status:** Reported to maintainers
- **CVE:** Pending assignment

### Affected Versions

- ❌ Hugging Face Transformers: all versions with OLMo3 support
- ❌ Affected models: OLMo3 checkpoints requiring conversion
### Remediation

**Immediate Fix:**

```python
# BEFORE (UNSAFE):
with (Path(model_path) / ".metadata").open("rb") as metadata_file:
    metadata = pickle.load(metadata_file)

# AFTER (SAFE):
import json

with (Path(model_path) / ".metadata.json").open("r") as metadata_file:
    metadata = json.load(metadata_file)  # Safe - JSON cannot execute code
```
**Long-term Fix:**

- Migrate all metadata to JSON format
- Deprecate pickle for model metadata
- Add `TRUST_REMOTE_CODE` checks if pickle is necessary
- Update the ProtectAI scanner to detect `.metadata` files
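If pickle cannot be removed immediately, a restricted `Unpickler` is a common stopgap: override `find_class` so that only an explicit allow-list of globals can be resolved, leaving a `__reduce__` payload with no way to obtain an arbitrary callable. This is a generic sketch, not code from Transformers; the allow-list contents are a placeholder and would need to name the checkpoint's real metadata classes:

```python
import io
import pickle

# Hypothetical allow-list; a real one would enumerate the checkpoint
# metadata classes the conversion script actually expects.
ALLOWED_GLOBALS = {
    ("builtins", "dict"),
    ("builtins", "list"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Every global lookup funnels through here, including the callable
        # a __reduce__ payload tries to resolve.
        if (module, name) in ALLOWED_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")

def restricted_load(fileobj):
    return RestrictedUnpickler(fileobj).load()

# Plain container data deserializes normally...
data = restricted_load(io.BytesIO(pickle.dumps({"keys": ["model.embed"]})))
# ...while any stream that resolves an unlisted global raises UnpicklingError.
```

This still trusts the pickle format's parser, so JSON migration remains the better long-term answer; the restriction only narrows what a malicious stream can invoke.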
## Usage Instructions (For Security Researchers)

### Testing the Vulnerability

```bash
# 1. Clone the transformers repository
git clone https://github.com/huggingface/transformers.git
cd transformers

# 2. Clone this PoC model
git lfs install
git clone https://huggingface.co/<USERNAME>/malicious-olmo3-poc

# 3. Run the conversion script (triggers the exploit)
python src/transformers/models/olmo3/convert_olmo3_weights_to_hf.py \
    --model-path ./malicious-olmo3-poc \
    --output-dir ./converted

# 4. Verify exploitation
cat /tmp/huntr_poc_exploited.txt
# Expected output: "VULNERABILITY EXPLOITED - Arbitrary code executed via pickle deserialization"
```
### What This Proves

- ✅ Arbitrary code execution during model loading
- ✅ No user warning or consent required
- ✅ Bypasses security scanning
- ✅ Affects real-world usage scenarios
## Ethical Guidelines

**✅ ALLOWED:**

- Security research and testing in isolated environments
- Responsible disclosure through official channels
- Educational use to understand vulnerabilities
- Contributing fixes to open source projects

**❌ PROHIBITED:**

- Using this model in production environments
- Distributing it without security context
- Executing it on shared or production systems
- Using it for malicious purposes
- Weaponizing it with harmful payloads
## Contact

- **Security Researcher:** [Your Name/Handle]
- **Disclosure Platform:** Huntr (https://huntr.com)
- **Project:** Hugging Face Transformers
- **Maintainer Contact:** security@huggingface.co

## Acknowledgments

- The Hugging Face team for the Transformers library
- ProtectAI for security scanning infrastructure
- The Huntr platform for the responsible disclosure process
This model is for security research and responsible disclosure only.
Use in accordance with applicable laws and ethical guidelines.
## License

This PoC is provided for security research under responsible disclosure principles. The Transformers library is licensed under Apache 2.0.