# ColliderML Benchmark Results
Machine-scored leaderboard results for the ColliderML benchmark tasks.
## Structure

```
results/
  {task}/
    {username}/
      {predictions_sha256}.json
```
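As a sketch, the path for a given record can be derived from the fields listed below; the helper name here is illustrative, not part of the ColliderML tooling:

```python
# Hypothetical helper (not part of the ColliderML library): build the
# storage path of a scored submission following the layout above.
def result_path(task: str, username: str, predictions_sha256: str) -> str:
    return f"results/{task}/{username}/{predictions_sha256}.json"
```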
Each JSON file records one scored submission:
| Field | Description |
|---|---|
| `submission_id` | UUID assigned by the backend |
| `task` | Benchmark task name (e.g. `tracking`) |
| `submitter` | Hugging Face username |
| `model_repo_id` | Optional link to the submitter's model repo |
| `submitted_at` | ISO 8601 timestamp |
| `scores` | Dict of metric name → value |
| `credits_earned` | Credits awarded for this submission |
| `is_baseline` | Whether this is a reference baseline |
| `predictions_sha256` | SHA-256 of the submitted parquet file |
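As a minimal sketch, the records can be read with the standard library once this dataset has been cloned locally (e.g. with `git clone` or `huggingface_hub`); the snippet only assumes the directory layout shown above:

```python
import json
from pathlib import Path

# Iterate over all scored submissions in a local copy of this dataset and
# print each record's task, submitter, and metric scores.
for result_file in sorted(Path("results").glob("*/*/*.json")):
    record = json.loads(result_file.read_text())
    print(record["task"], record["submitter"], record["scores"])
```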
## Tasks
| Task | Primary metric | Dataset |
|---|---|---|
| `tracking` | TrackML weighted efficiency | `ttbar_pu200` |
| `jets` | Jet classification AUC | `ttbar_pu200` |
| `anomaly` | Anomaly AUROC | mixed |
| `tracking_latency` | Inference latency (ms) | `ttbar_pu200` |
| `tracking_small` | TrackML efficiency under a parameter budget | `ttbar_pu200` |
| `data_loading` | Load throughput (events/s) | `ttbar_pu0` |
## Integration
Results are pushed automatically by the ColliderML backend after each scored submission. The ColliderML leaderboard Space reads from this dataset to render the public table.
Models that include `.eval_results/colliderml_*.yaml` files link back to this dataset, so scores appear on model cards automatically.
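For readers who want to rebuild a table offline, a rough sketch with pandas is shown below; the grouping and ranking choices are illustrative and may differ from how the leaderboard Space actually aggregates results:

```python
import json
from pathlib import Path

import pandas as pd

# Collect every (task, submitter, metric, value) tuple from the result files.
rows = []
for result_file in Path("results").glob("*/*/*.json"):
    record = json.loads(result_file.read_text())
    for metric, value in record["scores"].items():
        rows.append({
            "task": record["task"],
            "submitter": record["submitter"],
            "metric": metric,
            "value": value,
        })

df = pd.DataFrame(rows)

# Keep each submitter's best value per task and metric. "Best" is taken as
# the maximum here; latency-style metrics where lower is better would need
# the opposite ordering.
leaderboard = df.groupby(["task", "metric", "submitter"], as_index=False)["value"].max()
print(leaderboard.sort_values(["task", "metric", "value"], ascending=[True, True, False]))
```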
## Links
- Library: OpenDataDetector/ColliderML
- Data: CERN/ColliderML-Release-1
- Docs: opendatadetector.github.io/ColliderML