Improve model card: Add pipeline tag, library name, abstract, and additional tags
This PR updates the model card for the `SR-Scientist` model by:
- Adding `pipeline_tag: text-generation` so the model is discoverable under the text-generation task on the Hub.
- Including `library_name: transformers`, as evidenced by the model's configuration files, ensuring automated code snippets are correctly enabled on the Hub.
- Incorporating the paper's abstract to provide a more comprehensive overview of the model's capabilities and research context directly within the model card.
- Adding additional tags (`code-generation`, `agentic-ai`, `scientific-discovery`) to further enhance discoverability.
- The existing arXiv paper link and GitHub repository link have been retained.
- A sample usage section has not been added as no suitable, directly copyable Python snippet was found in the official GitHub README for simple model loading and inference.
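
For reviewers who want to try the checkpoint anyway, here is a minimal loading sketch. This is an assumption based on `library_name: transformers` and the Qwen3-Coder chat backbone, not a snippet from the official README; the repo id argument is a placeholder you must supply yourself:

```python
def generate(model_id: str, prompt: str, max_new_tokens: int = 256) -> str:
    """Load this checkpoint with transformers and run one chat turn.

    `model_id` is the Hub repo id of this checkpoint (left as a parameter,
    since the official README ships no copyable snippet to confirm it).
    """
    # Imported lazily so the helper can be defined without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True)
```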
```diff
@@ -1,13 +1,23 @@
 ---
-license: apache-2.0
-datasets:
-- GAIR/SR-Scientist
 base_model:
 - Qwen/Qwen3-Coder-30B-A3B-Instruct
+datasets:
+- GAIR/SR-Scientist
+license: apache-2.0
+pipeline_tag: text-generation
+library_name: transformers
+tags:
+- code-generation
+- agentic-ai
+- scientific-discovery
 ---
+
 ## SR-Scientist: Scientific Equation Discovery With Agentic AI
 
-This is the checkpoint from the RL training, which used Qwen3-Coder-30B-A3B-Instruct as a backbone in the paper ['SR-Scientist: Scientific Equation Discovery With Agentic AI'](https://arxiv.org/abs/2510.11661). For usage, please refer to the [
+This is the checkpoint from the RL training, which used Qwen3-Coder-30B-A3B-Instruct as a backbone in the paper ['SR-Scientist: Scientific Equation Discovery With Agentic AI'](https://arxiv.org/abs/2510.11661). For comprehensive usage instructions, including inference and RL training frameworks, please refer to the [official GitHub repository](https://github.com/GAIR-NLP/SR-Scientist).
+
+### Paper Abstract
+
+Recently, Large Language Models (LLMs) have been applied to scientific equation discovery, leveraging their embedded scientific knowledge for hypothesis generation. However, current methods typically confine LLMs to the role of an equation proposer within search algorithms like genetic programming. In this paper, we present SR-Scientist, a framework that elevates the LLM from a simple equation proposer to an autonomous AI scientist that writes code to analyze data, implements the equation as code, submits it for evaluation, and optimizes the equation based on experimental feedback. Specifically, we wrap the code interpreter into a set of tools for data analysis and equation evaluation. The agent is instructed to optimize the equation by utilizing these tools over a long horizon with minimal human-defined pipelines. Empirical results show that SR-Scientist outperforms baseline methods by an absolute margin of 6% to 35% on datasets covering four science disciplines. Additionally, we demonstrate our method's robustness to noise, the generalization of the discovered equations to out-of-domain data, and their symbolic accuracy. Furthermore, we develop an end-to-end reinforcement learning framework to enhance the agent's capabilities.
 
 ### 🖋️ Citation
 
```
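
The reordered frontmatter remains valid YAML with all the keys the Hub reads for widgets and code snippets. A quick sanity check, assuming PyYAML is available:

```python
import yaml  # PyYAML; assumed installed

# The updated model-card frontmatter from this PR (between the `---` fences).
FRONTMATTER = """\
base_model:
- Qwen/Qwen3-Coder-30B-A3B-Instruct
datasets:
- GAIR/SR-Scientist
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- code-generation
- agentic-ai
- scientific-discovery
"""

meta = yaml.safe_load(FRONTMATTER)
# The Hub reads these keys to enable the inference widget and auto snippets.
assert meta["pipeline_tag"] == "text-generation"
assert meta["library_name"] == "transformers"
print(sorted(meta))
# → ['base_model', 'datasets', 'library_name', 'license', 'pipeline_tag', 'tags']
```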