---
license: cc-by-4.0
task_categories:
- other
library_name: datasets
dataset_info:
  features:
  - name: instance_id
    dtype: string
  - name: base_commit
    dtype: string
  - name: created_at
    dtype: string
  - name: environment_setup_commit
    dtype: string
  - name: hints_text
    dtype: string
  - name: patch
    dtype: string
  - name: problem_statement
    dtype: string
  - name: repo
    dtype: string
  - name: test_patch
    dtype: string
  - name: meta
    struct:
    - name: commit_name
      dtype: string
    - name: failed_lite_validators
      sequence: string
    - name: has_test_patch
      dtype: bool
    - name: is_lite
      dtype: bool
    - name: llm_score
      struct:
      - name: difficulty_score
        dtype: int64
      - name: issue_text_score
        dtype: int64
      - name: test_score
        dtype: int64
    - name: num_modified_files
      dtype: int64
  - name: version
    dtype: string
  - name: install_config
    struct:
    - name: env_vars
      struct:
      - name: JUPYTER_PLATFORM_DIRS
        dtype: string
    - name: env_yml_path
      sequence: string
    - name: install
      dtype: string
    - name: log_parser
      dtype: string
    - name: no_use_env
      dtype: bool
    - name: packages
      dtype: string
    - name: pip_packages
      sequence: string
    - name: pre_install
      sequence: string
    - name: python
      dtype: string
    - name: reqs_path
      sequence: string
    - name: test_cmd
      dtype: string
  - name: requirements
    dtype: string
  - name: environment
    dtype: string
  - name: FAIL_TO_PASS
    sequence: string
  - name: FAIL_TO_FAIL
    sequence: string
  - name: PASS_TO_PASS
    sequence: string
  - name: PASS_TO_FAIL
    sequence: string
  - name: license_name
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: test
    num_bytes: 737537372
    num_examples: 21336
  download_size: 239735457
  dataset_size: 737537372
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

# Dataset Summary

SWE-rebench is a large-scale dataset designed to support training and evaluation of LLM-based software engineering (SWE) agents, building upon and expanding our earlier release, [SWE-bench-extra](https://huggingface.co/datasets/nebius/SWE-bench-extra). It is constructed using a fully automated pipeline that continuously extracts real-world interactive SWE tasks from GitHub repositories at scale, as detailed in our paper [SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents](https://arxiv.org/abs/2505.20411). The dataset currently comprises over 21,000 issue–pull request pairs from 3,400+ Python repositories, each validated for correctness through automated environment setup and test execution. A curated subset of these tasks also forms the basis of our continuously updated [SWE-rebench leaderboard](https://swe-rebench.com/leaderboard).
SWE-rebench builds upon and extends the methodology of [SWE-bench](https://www.swebench.com/) by incorporating several key enhancements detailed in our paper, including:

* A fully automated pipeline for continuous task collection.
* LLM-driven extraction and validation of environment installation instructions.
* An automated LLM-based task quality assessment pipeline that annotates tasks with labels such as clarity, complexity, and test patch validity.

We’ve released 7,500 pre-built Docker images used in our RL pipeline. They’re publicly available on [Docker Hub](https://hub.docker.com/repositories/swerebench). You do not need to build them yourself. 

# News

[2025/08/05] Uploaded the corresponding Docker images for 7,500 tasks to Docker Hub.

# How to Use

```python
from datasets import load_dataset
ds = load_dataset('nebius/SWE-rebench')
```
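Since the `meta` struct records whether each instance satisfies the lite criteria, a common first step is selecting that subset. Below is a minimal sketch; the two in-memory rows and the validator name are hypothetical stand-ins for real dataset rows, and with the loaded dataset you would pass the same predicate to `ds.filter`:

```python
# Sketch: selecting the "lite" subset via the `meta.is_lite` flag.
# The rows below are hypothetical stand-ins for real dataset rows.
rows = [
    {"instance_id": "owner__repo-101",
     "meta": {"is_lite": True, "failed_lite_validators": []}},
    {"instance_id": "owner__repo-102",
     "meta": {"is_lite": False, "failed_lite_validators": ["example_validator"]}},
]

def is_lite(row):
    """True if the instance passed all lite validators."""
    return row["meta"]["is_lite"]

lite_ids = [r["instance_id"] for r in rows if is_lite(r)]
print(lite_ids)  # ['owner__repo-101']
```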

# Dataset Structure
The SWE-rebench dataset schema extends the original SWE-bench schema with additional fields to support richer analysis. The complete schema is detailed in the table below. For more information about this data and the methodology behind collecting it, please refer to our paper.

| Field name                 | Type   | Description                                                                                     |
|----------------------------|--------|-------------------------------------------------------------------------------------------------|
| `instance_id`              | str    | A formatted instance identifier, usually as `repo_owner__repo_name-PR-number`.                 |
| `patch`                    | str    | The gold patch: the patch generated by the PR (minus test-related code) that resolved the issue. |
| `repo`                     | str    | The repository owner/name identifier from GitHub.                                              |
| `base_commit`              | str    | The commit hash of the repository representing the HEAD of the repository before the solution PR is applied. |
| `hints_text`               | str    | Comments made on the issue before the creation date of the solution PR’s first commit.          |
| `created_at`               | str    | The creation date of the pull request.                                                         |
| `test_patch`               | str    | A test-file patch that was contributed by the solution PR.                                     |
| `problem_statement`        | str    | The issue title and body.                                                                      |
| `version`                  | str    | Installation version to use for running evaluation.                                            |
| `environment_setup_commit` | str    | Commit hash to use for environment setup and installation.                                     |
| `FAIL_TO_PASS`             | str    | A JSON list of strings that represent the set of tests resolved by the PR and tied to the issue resolution. |
| `PASS_TO_PASS`             | str    | A JSON list of strings that represent tests that should pass before and after the PR application. |
| `meta`                     | str    | A JSON dictionary indicating whether the instance is lite, along with a list of failed lite validators if it is not. |
| `license_name`             | str    | The license of the repository.                                                                  |
| `install_config`           | str    | Installation configuration for setting up the repository.                                |
| `requirements`             | str    | Frozen requirements for the repository.                                                         |
| `environment`              | str    | Environment configuration for the repository.                                                  |

To execute tasks from SWE-rebench (i.e., set up their environments, apply patches, and run tests), we provide a [fork](https://github.com/SWE-rebench/SWE-bench-fork) of the original SWE-bench execution framework, adapted for our dataset's structure and features.
Our fork is based on the SWE-bench framework at its `Release 4.0.3` tag. The primary modification introduces functionality to source environment installation constants directly from the `install_config` field present in each task instance within SWE-rebench. This allows for more flexible and task-specific environment setups.

You can find the details of this modification in [this commit](https://github.com/SWE-rebench/SWE-bench-fork/commit/980d0cca8aa4e73f1d9f894e906370bef8c4de8a).

To build the necessary Docker images and run agents on SWE-rebench tasks, you have two main options:

1.  **Use our SWE-bench fork directly:** Clone the fork and utilize its scripts for building images and executing tasks. The framework will automatically use the `install_config` from each task.
2.  **Integrate similar functionality into your existing codebase:** If you have your own execution framework based on SWE-bench or a different system, you can adapt it by implementing a similar mechanism to parse and utilize the `install_config` field from the SWE-rebench task instances. The aforementioned commit can serve as a reference for this integration.
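For reference, evaluation follows the SWE-bench convention: an instance counts as resolved when every `FAIL_TO_PASS` test passes after the candidate patch is applied and every `PASS_TO_PASS` test still passes. A minimal sketch of that criterion, where the `results` mapping of test name to outcome is a hypothetical stand-in for a parsed test log:

```python
import json

# Sketch: the SWE-bench resolution criterion applied to a SWE-rebench row.
# `row` holds the test lists (JSON-encoded strings, per the schema);
# `results` maps test name -> bool and is a hypothetical parsed log.
def _as_list(value):
    return json.loads(value) if isinstance(value, str) else list(value)

def resolved(row, results):
    required = _as_list(row["FAIL_TO_PASS"]) + _as_list(row["PASS_TO_PASS"])
    return all(results.get(test, False) for test in required)

row = {"FAIL_TO_PASS": '["tests/test_fix.py::test_bug"]',
       "PASS_TO_PASS": '["tests/test_core.py::test_ok"]'}
print(resolved(row, {"tests/test_fix.py::test_bug": True,
                     "tests/test_core.py::test_ok": True}))   # True
print(resolved(row, {"tests/test_fix.py::test_bug": False,
                     "tests/test_core.py::test_ok": True}))   # False
```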

# License
The dataset is licensed under the Creative Commons Attribution 4.0 license. However, please respect the license of each specific repository on which a particular instance is based. To facilitate this, the license of each repository at the time of the commit is provided for every instance.

# Citation

```bibtex
@misc{badertdinov2025swerebenchautomatedpipelinetask,
      title={SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents}, 
      author={Ibragim Badertdinov and Alexander Golubev and Maksim Nekrashevich and Anton Shevtsov and Simon Karasik and Andrei Andriushchenko and Maria Trofimova and Daria Litvintseva and Boris Yangel},
      year={2025},
      eprint={2505.20411},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2505.20411}
}
```