OffSec RedTeam Codes
Token count: ~30B tokens.
OffSec RedTeam Codes is a curated corpus of code (and some auxiliary text) extracted from popular GitHub repositories related to offensive security / red teaming (pentesting, OSINT, C2, privilege escalation, exploitation, forensics, etc.). To our knowledge, it is the largest openly available dataset of red-team and offensive-security code compiled to date.
⚠️ Ethical use only. This dataset is for research, education, and defensive security testing in lawful environments with proper authorization. Do not use it to harm systems, violate terms of service, or break the law.
What’s new (Nov 2025)
- Parquet per-topic configs are now published for fast loading with `datasets.load_dataset(...)`.
- `raw/` folder added: original JSON/JSONL/NDJSON sources moved under `raw/` (e.g. `raw/github_topic_code-<topic>.jsonl`).
- All Parquet splits include a constant `topic` column.
- Shards are sized at ≈256–512 MB to balance memory, upload speed, and downstream streaming.
Repository layout
```text
/                              # root
  README.md                    # this card
  raw/                         # original source JSON/JSONL/NDJSON (one file per topic)
  <topic>/                     # per-topic Parquet config folder produced by HF Datasets
    train-0000X-of-00N.parquet
```
Schema
All examples share the same columns (Parquet + raw JSON):
- `content` (string) — full file text (notebook outputs stripped; `execution_count` cleared)
- `repo_name` (string) — `owner/name`
- `repo_id` (int) — GitHub repo id
- `path` (string) — file path in repo
- `branch` (string) — default branch at fetch time
- `license` (string|null) — SPDX id or GitHub license key
- `lang` (string) — approximate language/extension
- `stargazers` (int) — star count snapshot
- `pushed_at` (string, ISO-8601) — last push timestamp
- `topics_matched` (list[string]) — discovery topics that matched this repo
- `size` (int) — character length of `content`
- `topic` (string) — added during build; equals the topic/config name
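As a quick sanity check, here is a hypothetical record matching the schema above (all values are illustrative, not taken from the dataset), together with a small type-validation helper:

```python
# Illustrative record following the schema; the values are made up for this sketch.
record = {
    "content": "#!/usr/bin/env python3\nprint('hello')\n",
    "repo_name": "owner/name",
    "repo_id": 123456,
    "path": "tools/hello.py",
    "branch": "main",
    "license": "mit",
    "lang": "py",
    "stargazers": 42,
    "pushed_at": "2025-01-01T00:00:00Z",
    "topics_matched": ["pentesting"],
    "size": 38,  # character length of "content"
    "topic": "pentesting",
}

# Expected Python types per schema column; `license` may be null (None).
EXPECTED_TYPES = {
    "content": str, "repo_name": str, "repo_id": int, "path": str,
    "branch": str, "license": (str, type(None)), "lang": str,
    "stargazers": int, "pushed_at": str, "topics_matched": list,
    "size": int, "topic": str,
}

def validate(rec: dict) -> bool:
    """Check that a record carries every schema column with the expected type."""
    return all(isinstance(rec.get(k), t) for k, t in EXPECTED_TYPES.items())
```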
Load the Parquet configs (recommended)
Single topic
```python
from datasets import load_dataset

# Loads Parquet for the selected topic (config name)
ds = load_dataset(
    "tandevllc/offsec_redteam_codes",
    name="pentesting",  # pick any topic listed above
    split="train",
)
print(len(ds), ds.column_names)
```
Multiple topics (interleaved)
```python
from datasets import load_dataset, interleave_datasets

names = ["osint", "pentesting", "reverse-engineering"]
parts = [load_dataset("tandevllc/offsec_redteam_codes", name=n, split="train") for n in names]
# Sample uniformly across topics; adjust probabilities to weight topics differently:
probs = [1 / len(parts)] * len(parts)
ds = interleave_datasets(parts, probabilities=probs, seed=42)
```
Filter by language or license
```python
py = ds.filter(lambda r: r.get("lang") == "py")
permissive = ds.filter(
    lambda r: (r.get("license") or "").lower() in {"mit", "apache-2.0", "bsd-3-clause"}
)
```
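The same pattern extends to recency via the ISO-8601 `pushed_at` column. A sketch (the cutoff date and helper name are illustrative, not part of the dataset tooling):

```python
from datetime import datetime

# Timestamp format used by the `pushed_at` column (ISO-8601, UTC "Z" suffix).
ISO_FMT = "%Y-%m-%dT%H:%M:%SZ"

def pushed_after(rec: dict, cutoff: str = "2024-01-01T00:00:00Z") -> bool:
    """True if the repo's last push is newer than the cutoff."""
    return datetime.strptime(rec["pushed_at"], ISO_FMT) > datetime.strptime(cutoff, ISO_FMT)

# recent = ds.filter(pushed_after)  # ds as loaded above
```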
Load the raw JSON/JSONL sources (optional)
If you prefer the original line-delimited JSON:
```python
from datasets import load_dataset

# The JSON loader takes no repo_id argument; point data_files at the
# file inside the dataset repo via the hf:// URI scheme instead.
raw = load_dataset(
    "json",
    data_files="hf://datasets/tandevllc/offsec_redteam_codes/raw/github_topic_code-pentesting.jsonl",
    split="train",
)
```
Content selection and filtering
- File types: code-like extensions only (`py`, `pyw`, `pyi`, `js`, `mjs`, `cjs`, `ts`, `tsx`, `jsx`, `java`, `kt`, `kts`, `groovy`, `scala`, `c`, `h`, `cpp`, `cc`, `cxx`, `hpp`, `hh`, `hxx`, `ino`, `rs`, `go`, `rb`, `php`, `phtml`, `cs`, `fs`, `vb`, `swift`, `m`, `mm`, `sh`, `bash`, `zsh`, `ksh`, `fish`, `ps1`, `psd1`, `psm1`, `sql`, `pl`, `pm`, `r`, `jl`, `lua`, `dart`, `zig`, `nim`, `yaml`, `yml`, `toml`, `ini`, `cfg`, `conf`, `env`, `properties`, `gradle`, `cmake`, `make`, `mk`, `mkfile`, `json`, `ipynb`)
- Excluded paths: build outputs, vendored folders, caches, VCS dirs, virtualenvs
- Binaries/archives/media: filtered by extension + text heuristics
- Large files: default cap ≈2 MB (configurable)
- Minified blobs: excluded by average line-length heuristics
- Secrets: basic regex heuristics (keys, private certs) excluded where detected
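The minified-blob and size-cap checks above can be sketched roughly as follows (the threshold values are illustrative; the actual build pipeline may use different ones):

```python
def looks_minified(text: str, max_avg_line_len: float = 200.0) -> bool:
    """Average line-length heuristic: minified/bundled blobs tend to pack
    code into a few very long lines. Threshold is illustrative."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if not lines:
        return False
    avg = sum(len(ln) for ln in lines) / len(lines)
    return avg > max_avg_line_len

def exceeds_size_cap(text: str, cap_bytes: int = 2 * 1024 * 1024) -> bool:
    """Default ≈2 MB cap (UTF-8 bytes), configurable per the notes above."""
    return len(text.encode("utf-8")) > cap_bytes
```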
Intended uses
- Pretraining / continued pretraining of code LMs in the security domain
- RAG / retrieval with metadata filters (language, topics, license, stars)
- Code search / similarity tasks
- Curriculum sampling by `lang`, `stargazers`, `topic`, or recency (`pushed_at`)
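One way to sketch curriculum sampling by `stargazers`: weight records by a dampened star count so popular repos are upweighted without drowning out the long tail. The exponent `alpha` is an illustrative knob, not part of the dataset:

```python
import random

def curriculum_weights(records: list, alpha: float = 0.5) -> list:
    """Sampling weights proportional to stargazers**alpha (alpha<1 dampens
    the head of the popularity distribution). Illustrative sketch only."""
    raw = [max(r["stargazers"], 1) ** alpha for r in records]
    total = sum(raw)
    return [w / total for w in raw]

# Hypothetical records carrying the schema's `stargazers` column.
records = [
    {"path": "a.py", "stargazers": 10000},
    {"path": "b.py", "stargazers": 100},
    {"path": "c.py", "stargazers": 1},
]
weights = curriculum_weights(records)
sample = random.Random(0).choices(records, weights=weights, k=5)
```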
This dataset is not an instruction set for unauthorized intrusion. Use defensively and ethically.
Limitations & caveats
- Licenses vary per file. The compilation license does not replace upstream licenses.
- Heuristics can omit useful files or miss some undesirable ones.
- Time-varying metadata (stars, topics, default branch) reflects the state at crawl time.
License & Access
License: "TanDev Proprietary License — All Rights Reserved"
Upstream Licenses: Each record includes `repo_name` and `license` (see Schema). Rights in the original repositories remain with their owners. This compilation license does not limit rights you already have in the upstream sources.
If you are a repository author and would like your content removed, please open an issue or contact the maintainer.
Citation
```bibtex
@dataset{tandevllc_2025_offsec_redteam_codes,
  author = {Gupta, Smridh},
  title  = {OffSec RedTeam Codes},
  year   = {2025},
  url    = {https://huggingface.co/datasets/tandevllc/offsec_redteam_codes}
}
```
Maintainer
Smridh Gupta — smridh@tandev.us