# The paper's URL for linking
PAPER_URL = "https://arxiv.org/abs/2507.13337"

WHAT_IS_F1_HTML_TOP = f"""

FormulaOne

by AAI


Frontier AI models have recently demonstrated strong performance on mathematical and algorithmic benchmarks, including earning gold medals in olympiads and attaining top-percentile ratings in competitive programming contests. How well do such benchmarks capture the true depth of algorithmic reasoning as it arises in real-world research problems?

We believe that existing benchmarks fail to capture the deep reasoning skills required for complex, research-level algorithmic problems. To address this gap, we introduce FormulaOne.

FormulaOne consists of 220 novel dynamic programming problems over graphs. The problems are organised into three categories, ranging from moderate difficulty all the way up to research-level.

| Category | Size | Description |
| --- | --- | --- |
| Shallow | 100 | A set of “easier” problems. |
| Deeper | 100 | A set of challenging problems. |
| Deepest | 20 | A set of highly challenging problems. |
""" # Bottom is split so we can insert real Gradio media (images/video) from app.py. # Up to (and including) the "An Infinite Well" heading — tabs are inserted immediately after WHAT_IS_F1_HTML_BOTTOM_A_BEFORE_TABS = """

The Deepest category is incredibly demanding: it requires resolving many points of uncertainty and involves an array of reasoning steps, including topological and geometric insight, knowledge of mathematical domains such as extremal graph theory and logic, combinatorial considerations, precise implementation, and more.

Despite impressive performance on existing benchmarks, presently no model solves even a single 'Deepest Tier' problem.

An “Infinite Well” of Problems

""" # After the heading (and after the tabbed examples), before the first figure WHAT_IS_F1_HTML_BOTTOM_A_AFTER_TABS = """

While the problems are often natural to state, their solutions are far from obvious. The solvability of this vast class of problems is guaranteed by an algorithmic meta-theorem due to Courcelle, which broadly states:

“For every sufficiently tree-like graph, any problem definable in an expressive formal logic — Monadic Second-Order (MSO) logic — can be solved by a dynamic programming algorithm that operates in time linear in the order of the graph.”
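To give a flavour of such definitions (this is our own illustrative example, not one of the benchmark problems): the property “S is an independent set” can be written in MSO using a single set variable and the edge relation E:

```latex
\mathrm{IndSet}(S) \;\equiv\; \forall u \,\forall v \,\big( (u \in S \wedge v \in S) \rightarrow \neg E(u, v) \big)
```

Optimisation versions of such MSO-definable properties, such as maximising |S|, fall under standard extensions of the theorem.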

The key is to use a structure known as a tree decomposition, which organises the graph's vertices into a series of overlapping sets, or “bags”, that are themselves arranged in a tree.

An algorithm can then traverse this tree of bags, solving the problem piece by piece using dynamic programming. This process involves designing a “state” that summarises all necessary information about the partial solution within a bag, and then defining how this state transforms as vertices are introduced or forgotten, or as bags are merged.
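To make the pattern concrete, here is a minimal Python sketch of such a dynamic program for Maximum Independent Set over a *nice* tree decomposition. It is our own illustration, not code from the benchmark; the `Node` scaffolding (a kind tag plus vertex and child fields) and the adjacency map `adj` are assumptions of the sketch:

```python
class Node:
    """One node of a nice tree decomposition (hypothetical scaffolding)."""
    def __init__(self, kind, vertex=None, child=None, left=None, right=None):
        self.kind, self.vertex = kind, vertex
        self.child, self.left, self.right = child, left, right

def solve(node, adj):
    """Return a table {chosen subset of bag: best independent-set size} for the subtree."""
    if node.kind == "leaf":                    # empty bag, empty partial solution
        return {frozenset(): 0}

    if node.kind == "introduce":               # bag gains vertex v
        child, v, table = solve(node.child, adj), node.vertex, {}
        for chosen, best in child.items():
            table[chosen] = max(table.get(chosen, -1), best)       # exclude v
            if all(u not in adj[v] for u in chosen):               # include v
                key = chosen | {v}
                table[key] = max(table.get(key, -1), best + 1)
        return table

    if node.kind == "forget":                  # bag drops vertex v
        child, v, table = solve(node.child, adj), node.vertex, {}
        for chosen, best in child.items():
            key = chosen - {v}                 # project v out, keep the maximum
            table[key] = max(table.get(key, -1), best)
        return table

    # node.kind == "join": two children with identical bags
    left, right = solve(node.left, adj), solve(node.right, adj)
    return {chosen: left[chosen] + right[chosen] - len(chosen)  # bag counted twice
            for chosen in left}

# The optimum is read off at the root, whose bag is empty:
#   best = solve(root, adj)[frozenset()]
```

Each table holds at most 2^|bag| entries, so for graphs of bounded treewidth the full traversal runs in time linear in the size of the graph, which is exactly the kind of algorithm Courcelle's theorem guarantees.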

""" # Text immediately after the video; opens Evaluation section header/content (up to before Warmup figure) WHAT_IS_F1_HTML_AFTER_VIDEO = """

The deceptive simplicity of the problem statements belies the extraordinary difficulty of discovering the correct dynamic programming solution. This process is riddled with subtle combinatorial and logical pitfalls, demanding a profound understanding of the problem’s underlying structure. For a detailed walkthrough of the fifteen interdependent reasoning steps required to solve a single hard problem, Maximal-Cluster-Graph, see the appendix of our paper.

Evaluation

All models were evaluated using their highest available reasoning settings and with the maximum context length permitted. To give models the best possible chance of success, we provide a generous few-shot prompt that covers a broad array of the ideas and techniques involved in solving these problems.

Each submitted solution is subjected to a rigorous and automated test suite that measures three key aspects of its validity:

To support research and encourage community contributions, the FormulaOne-Shallow ("warmup") dataset is released as a public resource for training and fine-tuning models. The complete test suite for all 100 'Shallow' problems is available, alongside a standalone evaluation environment, in our GitHub repository.

To maintain the integrity of the core benchmark, only a minimal subset of tests is released for the Deeper and Deepest tiers. Solutions submitted to our benchmark are evaluated against a withheld, comprehensive test suite.

""" # Evaluation: begins the "Model Accuracy" subsection and the Warmup paragraph, up to (but not including) the Warmup figure. WHAT_IS_F1_HTML_EVAL_BEFORE_WARMUPFIG = """

Model Accuracy

On the FormulaOne-Shallow problems, frontier models perform reasonably well. This confirms that they have a foundational capability for these types of algorithmic tasks; in other words, the tasks are squarely in-distribution.

However, as the reasoning depth increases in the Deeper tier, and solutions require the discovery and integration of novel and more complex state representations, model performance drops off sharply.

""" # Tail after Deeper figure (closes evaluation section + container) WHAT_IS_F1_HTML_AFTER_TIER1FIG_TAIL = """

This trend culminates in the Deepest tier, where the difficulty is characteristic of exploratory research problems. On this set of 20 problems, no current frontier model solves even a single one. This result starkly illustrates the gap that remains between high performance on existing benchmarks and the deep algorithmic reasoning required for truly complex problems.

""" SUBMISSION_TERMS_TEXT = """ ### Competition terms - By submitting, you agree to the **FormulaOne Submission Agreement (v1.2)** and our **Privacy Notice**. - Your uploaded file remains yours; we only use it to evaluate, score, and contact you about your result. **Licensing for the public benchmark assets (informational)** - **Evaluator code:** Apache License 2.0 - **Problem statements & public tests:** Creative Commons **CC BY 4.0** See the project's **README licence section** and full texts: `LICENSE- APACHE2`, `LICENSE-CC-BY` in our GitHub repo. **Platform** - Your use of Hugging Face is also governed by Hugging Face's Terms and Privacy Policy. """ EVALUATION_QUEUE_TEXT = """ ## Submitting to the FormulaOne Leaderboard This leaderboard evaluates systems on the FormulaOne core dataset. Submissions consist of a .jsonl file with solution code for each problem. ### 📁 I. Format Your Submission File Your submission must be a .jsonl file with one entry per problem: ```json {"problem_id": "1", "solution": ""} {"problem_id": "2", "solution": ""} ... ``` - problem_id: Must match the official list of FormulaOne core problems. - solution: A Python code implementing the required callback functions. 📄 Full list of problem_ids: View the [FormulaOne core dataset](https://github.com/double-ai/formulaone-dataset-release/tree/main/dataset/formulaone) for the complete list of problem IDs. ⚠️ Validation Rules: Submissions must: - Contain exactly two columns: ["problem_id", "solution"] - Include all required problems (no missing/unknown IDs) - Provide solutions as Python strings - Avoid duplicates ### 📤 II. Submit via the UI below - Upload your `.jsonl` file. - Fill in the following fields: - **System Name** - **Organization** - **System Type** - Click **Submit**. ### ⏱️ After Submission Submissions are validated and evaluated within ~24 hours. Results will appear on the leaderboard once processed. """ CITATION_BUTTON_LABEL = """📚 How to cite FormulaOne""" CITATION_BUTTON_TEXT = r""" @misc{beniamini2025formulaonemeasuringdepthalgorithmic, title={FormulaOne: Measuring the Depth of Algorithmic Reasoning Beyond Competitive Programming}, author={Gal Beniamini and Yuval Dor and Alon Vinnikov and Shir Granot Peled and Or Weinstein and Or Sharir and Noam Wies and Tomer Nussbaum and Nadav Schweiger and Ido Ben Shaul and Tomer Zekharya and Yoav Levine and Shai Shalev-Shwartz and Amnon Shashua}, year={2025}, eprint={2507.13337}, archivePrefix={arXiv}, primaryClass={cs.AI}, url={https://arxiv.org/abs/2507.13337}, } """