---
title: README
emoji: π
colorFrom: yellow
colorTo: indigo
sdk: static
pinned: false
---
# **About Us**
Satori (悟り) is a Japanese term meaning "sudden enlightenment" or "awakening." The Satori team is dedicated to the pursuit of Artificial General Intelligence (AGI), with a particular focus on enhancing the reasoning capabilities of large language models (LLMs), a crucial step toward this ultimate goal.
Along this journey, the Satori team has released two major research contributions:
- **Satori (ICML 2025)**: Released concurrently with DeepSeek-R1, we propose a novel post-training paradigm that enables LLMs to perform an extended reasoning process with self-reflection: 1) a small-scale format tuning (FT) stage to internalize a specific reasoning format, followed by 2) a large-scale self-improvement stage leveraging reinforcement learning (RL). Our approach results in Satori, a 7B LLM that achieves state-of-the-art reasoning performance.
- **Satori-SWE**: This work addresses a particularly challenging domain for LLMs: real-world software engineering (SWE) tasks. We propose Evolutionary Test-Time Scaling (EvoScale), which treats LLM generation as an evolutionary process (see the sketch after this list). By combining reinforcement learning (RL) training with EvoScale test-time scaling, our 32B model, Satori-SWE-32B, achieves performance comparable to models exceeding 100B parameters while requiring only a small number of samples.
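
For readers who want a concrete picture of what an evolutionary test-time scaling loop looks like, below is a minimal, hypothetical Python sketch: a population of candidate outputs is sampled, scored, and the best candidates seed the next round of generation. The helper callables (`generate_candidates`, `score`, `refine`) are illustrative assumptions, not part of our released code; the actual EvoScale method is described in the Satori-SWE paper.

```python
# Illustrative sketch only: a generic evolutionary test-time scaling loop.
# `generate_candidates`, `score`, and `refine` are hypothetical helpers
# standing in for LLM sampling, candidate scoring, and prompt refinement.
from typing import Callable, List


def evolutionary_test_time_scaling(
    prompt: str,
    generate_candidates: Callable[[str, int], List[str]],  # sample candidate outputs from an LLM
    score: Callable[[str], float],                          # score a candidate (e.g., reward or test signal)
    refine: Callable[[str, List[str]], str],                # build the next-round prompt from the survivors
    population_size: int = 8,
    generations: int = 3,
    survivors: int = 2,
) -> str:
    """Keep a small population of candidates and iteratively re-generate
    conditioned on the best ones, so later rounds improve on earlier ones
    instead of sampling independently."""
    population = generate_candidates(prompt, population_size)
    for _ in range(generations):
        # Rank the current population and keep the top candidates.
        elites = sorted(population, key=score, reverse=True)[:survivors]
        # Condition the next round of generation on the current best candidates.
        next_prompt = refine(prompt, elites)
        population = elites + generate_candidates(next_prompt, population_size - len(elites))
    return max(population, key=score)
```

In Satori-SWE, RL training teaches the model itself to produce improved generations round over round, rather than relying on an external scorer alone at test time.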
# **Resources**
If you are interested in our work, please refer to our blog and research papers for more technical details!
- [Blog](https://satori-reasoning.github.io/blog/)
- [Satori](https://arxiv.org/pdf/2502.02508)
- [Satori-SWE](https://satori-reasoning.github.io)
# **Citation**
If you find our models and data helpful, please cite our papers:
## Satori
```bibtex
@misc{shen2025satorireinforcementlearningchainofactionthought,
  title={Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search},
  author={Maohao Shen and Guangtao Zeng and Zhenting Qi and Zhang-Wei Hong and Zhenfang Chen and Wei Lu and Gregory Wornell and Subhro Das and David Cox and Chuang Gan},
  year={2025},
  eprint={2502.02508},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.02508},
}
```
## Satori-SWE
```bibtex
@misc{zeng2025satorisweevolutionarytesttimescaling,
  title={Satori-SWE: Evolutionary Test-Time Scaling for Sample-Efficient Software Engineering},
  author={Guangtao Zeng and Maohao Shen and Delin Chen and Zhenting Qi and Subhro Das and Dan Gutfreund and David Cox and Gregory Wornell and Wei Lu and Zhang-Wei Hong and Chuang Gan},
  year={2025},
  eprint={2505.23604},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2505.23604},
}
```