# The paper's URL for linking
PAPER_URL = "https://arxiv.org/abs/2507.13337"

WHAT_IS_F1_HTML_TOP = f"""
Frontier AI models have recently demonstrated strong performance on mathematical and algorithmic benchmarks, including earning gold medals in olympiads and attaining top-percentile ratings in competitive programming contests. How well do such benchmarks capture the true depth of algorithmic reasoning as it arises in real-world research problems?
We believe that existing benchmarks fail to capture the deep reasoning skills required for complex, research-level algorithmic problems. To address this gap, we introduce FormulaOne.
FormulaOne consists of 220 novel dynamic programming problems over graphs. The problems are organised into three categories, ranging from moderate difficulty all the way up to research level.
This last category is incredibly demanding, requiring the resolution of many points of uncertainty and involving an array of reasoning steps: topological and geometric insight, knowledge of mathematical domains such as extremal graph theory and logic, combinatorial considerations, precise implementation, and more.
Despite impressive performance on existing benchmarks, presently no model solves even a single 'Deepest Tier' problem.
While the problems are often natural to state, their solutions are far from obvious. The solvability of this vast class of problems is guaranteed by an algorithmic meta-theorem due to Courcelle, which broadly states:
“For every sufficiently tree-like graph, any problem definable in an expressive formal logic — Monadic Second-Order (MSO) logic — can be solved by a dynamic programming algorithm that operates in time linear in the order of the graph.”
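To illustrate the expressive power of MSO, which can quantify over sets of vertices: 3-colourability, for instance, is definable as ∃C₁ ∃C₂ ∃C₃ such that every vertex belongs to some Cᵢ and no edge has both endpoints in the same Cᵢ.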
The key is to use a structure known as a tree decomposition, which organises the graph's vertices into a series of overlapping sets, or “bags”, that are themselves arranged in a tree.
An algorithm can then traverse this tree of bags, solving the problem piece by piece using dynamic programming. This process involves designing a “state” that summarises all necessary information about the partial solution within a bag, and then defining how this state transforms as vertices are introduced, forgotten, or bags are merged.
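To make this concrete, here is a minimal sketch, not taken from the paper, of such a dynamic program over a nice tree decomposition (one whose nodes each introduce a vertex, forget a vertex, or join two identical bags), for the classic task of counting independent sets. The node encoding, the bag-ordering conventions, and the adjacent predicate are illustrative assumptions.

<pre><code># Hedged sketch: counting independent sets by DP over a *nice* tree
# decomposition. table[mask] counts partial solutions whose intersection
# with the current bag is exactly the vertex subset encoded by mask.
def count_independent_sets(node, adjacent):
    # node = (kind, bag, children); kind is "leaf", "introduce",
    # "forget", or "join"; bag is a tuple of vertices. Conventions
    # assumed here: "introduce" appends the new vertex at the end of
    # its bag, and "forget" drops the last vertex of its child's bag.
    kind, bag, children = node
    if kind == "leaf":
        return [1]                        # empty bag: one empty partial solution
    if kind == "join":                    # both children carry this same bag
        a = count_independent_sets(children[0], adjacent)
        b = count_independent_sets(children[1], adjacent)
        return [x * y for x, y in zip(a, b)]
    child = count_independent_sets(children[0], adjacent)
    if kind == "introduce":               # bag = child bag + (v,)
        v, k = bag[-1], len(bag) - 1
        table = [0] * (1 << len(bag))
        for mask in range(1 << k):
            table[mask] = child[mask]     # partial solutions leaving v out
            ok = all(not ((mask >> j) & 1) or not adjacent(v, bag[j])
                     for j in range(k))
            if ok:                        # v joins: no selected neighbour in bag
                table[mask | (1 << k)] = child[mask]
        return table
    # kind == "forget": child bag = bag + (w,); sum over w's two states
    k = len(bag)
    return [child[mask] + child[mask | (1 << k)] for mask in range(1 << k)]
</code></pre>

If the root's bag is empty, count_independent_sets(root, adjacent)[0] is the number of independent sets of the whole graph. The introduce step is sound because, by the defining property of tree decompositions, any previously seen neighbour of the introduced vertex must still lie in the current bag.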
""" # Text immediately after the video; opens Evaluation section header/content (up to before Warmup figure) WHAT_IS_F1_HTML_AFTER_VIDEO = """The deceptive simplicity of the problem statements belies the extraordinary difficulty of discovering the correct dynamic programming solution. This process is riddled with subtle combinatorial and logical pitfalls, demanding a profound understanding of the problem’s underlying structure. For a detailed walkthrough of the fifteen interdependent reasoning steps required to solve a single hard problem — Maximal-Cluster-Graph
— see the appendix of our paper.
All models were evaluated using their highest available reasoning settings and with the maximum context length permitted. To give models the best possible chance of success, we provide a generous few-shot prompt that covers a broad array of the ideas and techniques involved in solving these problems.
Each submitted solution is subjected to a rigorous and automated test suite that measures three key aspects of its validity:
To support research and encourage community contributions, the FormulaOne-Shallow
("warmup") dataset is released as a public resource for training and fine-tuning models. The complete test suite for all 100 'Shallow' problems is available, alongside a standalone evaluation environment, in our GitHub repository.
To maintain the integrity of the core benchmark, only a minimal subset of tests is released for the Deeper and Deepest Tier problems. Solutions submitted to our benchmark are evaluated against a withheld, comprehensive test suite.
""" # Evaluation: begins the "Model Accuracy" subsection and the Warmup paragraph, up to (but not including) the Warmup figure. WHAT_IS_F1_HTML_EVAL_BEFORE_WARMUPFIG = """On the FormulaOne-Shallow problems, frontier models perform reasonably well. This confirms they have a foundational capability for these types of algorithmic tasks, in other words, the tasks are squarely in-distribution.
However, as reasoning depth increases in the Deeper Tier and solutions require the discovery and integration of novel, more complex state representations, model performance drops off sharply.
""" # Tail after Deeper figure (closes evaluation section + container) WHAT_IS_F1_HTML_AFTER_TIER1FIG_TAIL = """This trend culminates in Deepest Tier, where the difficulty is characteristic of exploratory research problems. On this set of 20 problems, no current frontier model solves even a single one. This result starkly illustrates the gap that remains between high performance on existing benchmarks and the deep algorithmic reasoning required for truly complex problems.