---
configs:
- config_name: torchbench
  data_files:
  - split: benchmark
    path: "backend_bench_problems.parquet"
- config_name: ops_traces_models
  data_files:
  - split: operator_input_models
    path: "operator_input_models_mapping.parquet"
---

# TorchBench

The TorchBench suite of [BackendBench](https://github.com/meta-pytorch/BackendBench) is designed to mimic real-world use cases. It provides operators and inputs derived from 155 model traces found in [TIMM](https://huggingface.co/timm) (67), [Hugging Face Transformers](https://huggingface.co/docs/transformers/en/index) (45), and [TorchBench](https://github.com/pytorch/benchmark) (43). (These are also the models PyTorch developers use to [validate performance](https://hud.pytorch.org/benchmark/compilers).) You can view the origin of these traces by switching the subset in the dataset viewer to `ops_traces_models`, or select `torchbench` to browse the full dataset.

When running BackendBench, much of the extra information about what you are testing is abstracted away, so you can simply run `uv run python --suite torchbench ...`. Here, however, we provide the test suite as a dataset that can be explored directly. It includes details about why certain operations and arguments were included or excluded, reflecting the careful consideration behind curating the set.

You can download the dataset in either format:

- `backend_bench_problems.parquet` (default format on Hugging Face)
    
- `backend_bench_problems.json` (more human-readable)
    

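Both files can be loaded with standard tooling. A minimal sketch using `pandas`, assuming the files have been downloaded locally:

```python
import pandas as pd

# Default Parquet format (also what the Hugging Face viewer serves).
df = pd.read_parquet("backend_bench_problems.parquet")

# The JSON file carries the same records in a more human-readable form;
# depending on its layout, pd.read_json may need lines=True.
# df = pd.read_json("backend_bench_problems.json")

print(df.head())
```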
### Fields

- **uuid** – Unique identifier for the `(op_name, args)` pair.
    
- **op_name** – Full name of the operator being tested.
    
- **args** – Serialized form of the inputs from the trace. [See details below](#serialized-arguments-in-backendbench).
    
- **runnable** – Whether the operator is runnable in BackendBench (some are not yet supported).
    
- **included_in_benchmark** – Whether this `(op_name, args)` pair is tested in the TorchBench suite.
    
- **why_excluded** – If not included, a list of reasons for exclusion (e.g., "BackendBench does not support correctness testing for random ops yet," "BackendBench does not support correctness testing for tensor creation and manipulation ops yet").
    
- **is_synthetic** – Marks synthetically generated inputs (e.g., very large tensors). These are currently excluded from the benchmark.
    
- **runtime_ms** – Execution time in milliseconds, measured on a single GPU of a machine with 8× H100 GPUs and an AMD EPYC 9654 96-core processor.
    
- **relative_runtime_to_kernel_launch** – `runtime_ms` divided by the runtime of a dummy CUDA op (`torch.empty(0, device="cuda")`), which represents kernel launch overhead.
    
- **is_overhead_dominated_op** – Flags operator/argument pairs running close to CUDA overhead as “performance canaries.” [Histogram analysis](https://github.com/meta-pytorch/BackendBench/issues/108) showed that a 1.3× threshold above CUDA overhead is a useful cutoff. These tests can be run for sanity-checking kernels with `uv run python --suite torchbench --check-overhead-dominated-ops ...`.
    
- **count** – Number of times this operator/input pair appeared in model traces.
    
- **in_models** – List of models (from real-world traces) where this operator/input pair appears.
    
- **in_models_count** – Number of distinct models in which this operator/input pair occurs.
    
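As a sketch of how these fields combine, here is one way to slice the dataset with `pandas` (continuing from the `df` loaded above; the field names are exactly those listed):

```python
# (op_name, args) pairs actually exercised by the TorchBench suite.
benchmark = df[df["included_in_benchmark"]]

# "Performance canaries": pairs whose runtime sits close to kernel
# launch overhead (see is_overhead_dominated_op above).
canaries = benchmark[benchmark["is_overhead_dominated_op"]]

# Most frequently traced operator/input pairs across models.
top = benchmark.sort_values("count", ascending=False)
print(top[["op_name", "count", "in_models_count"]].head(10))
```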

# Serialized Arguments in BackendBench

Generally, arguments are serialized by storing tensor shapes and dtypes rather than values, and preserving everything else as-is, so the format is fairly intuitive. For example:

`((T([8, 8, 8, 8, 8], f16), T([8, 8, 8, 8, 8], f16)), {})`

Below, we describe the format in detail.

## Format

BackendBench stores function arguments as strings with all parameters needed to reproduce PyTorch operations:

```python
((arg1, arg2, ...), {'key1': val1, 'key2': val2})
```

For example:

```python
(([T([5, 5], f32), T([3, 3], i64), 42],), {'weight': T([3, 3], f32)})
```

## Tensor Representation

Tensors use the format `T([shape], dtype)` or `T([shape], dtype, [stride])`:

```python
T([10, 20], f32)           # 10×20 float32 tensor
T([1, 512, 768], f16)      # 1×512×768 float16 tensor
T([64], i32)               # 64-element int32 vector
T([10, 20], f32, [1, 10])  # 10×20 float32 tensor with explicit (transposed) strides
```

**Data types**: `f16/f32/f64` (float), `bf16` (bfloat16), `i32/i64` (int), `b8` (bool)

## Examples

**Single tensor argument:**

```python
((T([48, 24, 28, 28], f16),), {})
```

48×24×28×28 float16 tensor, no keyword arguments

**Multiple tensors:**

```python
((T([8, 8, 8, 8, 8], f16), T([8, 8, 8, 8, 8], f16)), {})
```

Two 5D float16 tensors of identical shape

**Mixed arguments:**

```python
((T([128, 256], f16), [1024, 249, 249]), {'dtype': torch.float16, 'device': 'cuda'})
```

Positional args are a tensor and a list; `dtype` and `device` are passed as keyword arguments

**Complex nested:**

```python
(([T([5, 5], f32), T([3, 3], i64), 42],), {'weight': T([3, 3], f32)})
```

A list containing tensors and a number, plus a tensor keyword argument

## Argument Types

- **Tensors**: `T([shape], dtype)`
    
- **Lists**: `[item1, item2, ...]` (can contain tensors)
    
- **Primitives**: `42`, `'hello'`, `True`, `None`
    
- **PyTorch objects**: `torch.float16`, `torch.strided`
    
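Putting this together, a serialized string can be reconstructed by evaluating it in a namespace that defines `T` and the dtype codes. The helper below is a minimal sketch for illustration, not BackendBench's actual deserializer:

```python
import torch

# Dtype codes from the table above, bound as names so a serialized
# string can be evaluated directly.
f16, f32, f64 = torch.float16, torch.float32, torch.float64
bf16, i32, i64, b8 = torch.bfloat16, torch.int32, torch.int64, torch.bool

def T(shape, dtype, stride=None):
    # Traces record shapes/dtypes, not values, so placeholder data suffices.
    if stride is not None:
        return torch.empty_strided(shape, stride, dtype=dtype)
    return torch.empty(shape, dtype=dtype)

serialized = "((T([8, 8, 8, 8, 8], f16), T([8, 8, 8, 8, 8], f16)), {})"
args, kwargs = eval(serialized)
out = torch.ops.aten.add.Tensor(*args, **kwargs)
```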

# Trace Files in BackendBench

This repository includes `.txt` trace files, which were the original output format of model traces and are used to compose the dataset. Here’s their structure:

## Format

Trace files capture PyTorch operations and arguments from real model executions:

```
Operator: operation_name
cnt: count, serialized_arguments
cnt: count, serialized_arguments
...
```

## Structure

**Operator line**: Specifies the PyTorch operation

```
Operator: aten.add.Tensor
Operator: aten.relu.default
Operator: aten.linear.default
```

**Count lines**: Show how often each argument combination was used

```
cnt: 42, ((T([10, 20], f16), T([10, 20], f16)), {})
cnt: 0, ((T([5, 5], f32), T([5, 5], f32)), {})
```

## Reading Count Lines

- **Count `42`**: Argument combination appeared 42 times in traced models
    
- **`cnt: 0`**: Synthetic/generated arguments (not from real models)
    
- **`cnt: >0`**: Real usage frequency from model traces
    

**Arguments**: Same format as serialized arguments – `((args), {kwargs})`

## Example

```
Operator: aten.add.Tensor
cnt: 156, ((T([1, 512, 768], f16), T([1, 512, 768], f16)), {})
cnt: 89, ((T([32, 128], f32), T([32, 128], f32)), {})
cnt: 0, ((T([10, 10], f16), T([10, 10], f16)), {})

Operator: aten.relu.default  
cnt: 234, ((T([64, 256], f16),), {})
```

This shows:

- `aten.add.Tensor` called 156 times with 1×512×768 tensors
    
- Same operation called 89 times with 32×128 tensors
    
- One synthetic test case (`cnt: 0`)
    
- `aten.relu.default` called 234 times with a 64×256 tensor
    
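A reader for this format can be quite small. The function below is a hypothetical sketch, not BackendBench's own parser:

```python
from collections import defaultdict

def parse_trace(path):
    """Map operator name -> list of (count, serialized_args) entries."""
    ops = defaultdict(list)
    op = None
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("Operator:"):
                op = line.removeprefix("Operator:").strip()
            elif line.startswith("cnt:"):
                head, args = line.split(",", 1)
                # cnt: 0 marks synthetic arguments; cnt > 0 is real frequency.
                ops[op].append((int(head.removeprefix("cnt:")), args.strip()))
    return ops
```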

**Note: Traces may be deprecated in the future, but are described here as they are currently included in the dataset/codebase.**

# Acknowledgements

We are extremely grateful to the [TritonBench](https://github.com/pytorch-labs/tritonbench/tree/main) team for these traces and their intuitive format.