---
license: apache-2.0
task_categories:
- question-answering
language:
- ru
pretty_name: T-math
size_categories:
- n<1K
dataset_info:
  features:
  - name: question
    dtype: string
  - name: verifiable_answer
    dtype: string
  - name: year
    dtype: string
  - name: grade
    dtype: string
  - name: full_answer
    dtype: string
  - name: solutions
    list: string
  - name: task_complexity
    dtype: string
  - name: olympiad
    dtype: string
  splits:
  - name: train
    num_bytes: 510955
    num_examples: 331
  download_size: 228445
  dataset_size: 510955
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# 🧮 T-Math

**T-Math** is a dataset of Russian math olympiad problems created to assess the reasoning capabilities of large language models (LLMs) in mathematics.
It includes 331 problems from the [All-Russian School Olympiad](https://vos.olimpiada.ru/) and the [Moscow Olympiad](https://mos.olimpiada.ru) for high school students, covering the period from 1998 to 2025.
The tasks and their ground-truth answers were extracted automatically and subsequently verified by human assessors.

Key features:
- Challenging problems that require multi-step reasoning (the median completion length for Qwen3-32B is 16K tokens), sourced from top-tier Russian olympiads
- Easily verifiable: answers are numeric-only and checked with the `math_verify` library, which compares mathematical expressions
- Not yet saturated, even by frontier reasoning models such as Gemini 2.5 Pro and DeepSeek R1
- Contains 331 samples, making it the largest Russian olympiad-level math benchmark and more statistically robust than smaller datasets such as the 30-problem AIME benchmark
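The dataset can be loaded with the `datasets` library. A minimal sketch (the repo id below is an assumption; replace it with this dataset's actual path on the Hub):

```python
from datasets import load_dataset

# Assumed repo id; substitute the actual Hub path of this dataset.
ds = load_dataset("t-tech/T-math", split="train")

print(len(ds))  # 331
sample = ds[0]
print(sample["question"])           # problem statement in Russian
print(sample["verifiable_answer"])  # numeric ground-truth answer
```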
## 📊 Evaluation Results

| Model | pass@1 |
|--|--|
| o4-mini-high | **0.73** |
| DeepSeek-R1-0528 | <ins>0.71</ins> |
| Gemini-2.5-Pro | 0.70 |
| Claude Sonnet 4 | 0.56 |
| T-pro-it-2.0 | 0.54 |
| Qwen3-32B | 0.53 |
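As a rough sketch of the metric (the exact sampling setup behind the table is not specified here), pass@1 can be estimated by averaging binary correctness over several sampled completions per problem, e.g. using the `accuracy_reward` function from the snippet in the "How to use" section below:

```python
def pass_at_1(completions: list[str], gold: str) -> float:
    """Fraction of sampled completions whose final answer matches the gold answer."""
    return sum(accuracy_reward(c, gold) for c in completions) / len(completions)
```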
## 🗂️ Filtering procedure

The text was extracted from PDFs using [Qwen/Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct). Tasks, along with their ground-truth and verifiable (numeric) answers, were then extracted via LLM calls.
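As an illustration only (the actual extraction pipeline is not published here), page-level OCR with Qwen2.5-VL through `transformers` might look roughly like this; the instruction text and file name are made up:

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

MODEL_ID = "Qwen/Qwen2.5-VL-72B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(MODEL_ID)

# One rendered PDF page per request; the instruction below is only an example.
messages = [{"role": "user", "content": [
    {"type": "image", "image": "olympiad_page_01.png"},
    {"type": "text", "text": "Transcribe every problem and its answer from this page, using LaTeX for formulas."},
]}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
images, videos = process_vision_info(messages)
inputs = processor(text=[text], images=images, videos=videos, padding=True, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=2048)
print(processor.batch_decode(output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0])
```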
We filtered out invalid questions using an LLM, based on the following criteria:
- Tasks requiring multiple answers
- Tasks without a single correct answer
- Theorem-like tasks where the main goal is proving a statement, making automatic verification non-trivial
- Tasks with non-numeric answers, to simplify answer comparison
- Tasks that cannot be solved without access to an accompanying image
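As a toy illustration of the numeric-answer criterion (the real filter was LLM-based, not rule-based), such a check could look like this:

```python
def is_numeric_answer(verifiable_answer: str) -> bool:
    """Rough check that an answer is a plain number (the actual filter used an LLM)."""
    try:
        float(verifiable_answer.replace(",", "."))  # tolerate decimal commas
        return True
    except ValueError:
        return False

assert is_numeric_answer("3.5")
assert not is_numeric_answer("x^2 + 1")
```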
Next, we removed tasks of moderate difficulty, namely those on which Qwen3-8B achieved a 100% pass@16 rate, as they offer limited value for benchmarking reasoning.
Finally, both the questions and the verifiable answers were manually reviewed by assessors to ensure consistency with the original sources.
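Conceptually, the difficulty filter mentioned above can be sketched as follows (names are illustrative; `accuracy_reward` is defined in the snippet below, and 16 completions are assumed to be sampled from Qwen3-8B per task):

```python
def keep_task(task: dict, qwen3_8b_completions: list[str]) -> bool:
    """Drop tasks that Qwen3-8B solves in every sampled attempt (100% pass@16)."""
    n_correct = sum(
        accuracy_reward(completion, task["verifiable_answer"])
        for completion in qwen3_8b_completions
    )
    return n_correct < len(qwen3_8b_completions)  # keep only tasks that are not always solved
```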
## 🛠️ How to use

Add the following system prompt to guide the model to return its final answer inside `\boxed{}`, which makes it easier to parse:

```
Решите следующую математическую задачу эффективно и ясно. Последняя строка вашего ответа должна иметь следующий формат:
'Таким образом, окончательный ответ: $\boxed{ОТВЕТ}$.' (без кавычек), где ОТВЕТ - это просто окончательное число или выражение, решающее задачу.
Думайте шаг за шагом перед ответом.
```
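In English, the prompt asks the model to solve the problem efficiently and clearly, to think step by step, and to end with the line "Таким образом, окончательный ответ: $\boxed{ОТВЕТ}$." ("Thus, the final answer is: $\boxed{ANSWER}$."). A minimal sketch of wiring this prompt into a chat request (the repo id is an assumption):

```python
from datasets import load_dataset

SYSTEM_PROMPT = (
    "Решите следующую математическую задачу эффективно и ясно. "
    "Последняя строка вашего ответа должна иметь следующий формат: "
    "'Таким образом, окончательный ответ: $\\boxed{ОТВЕТ}$.' (без кавычек), "
    "где ОТВЕТ - это просто окончательное число или выражение, решающее задачу. "
    "Думайте шаг за шагом перед ответом."
)

# Assumed repo id; substitute the actual Hub path of this dataset.
task = load_dataset("t-tech/T-math", split="train")[0]

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": task["question"]},
]
# `messages` can now be passed to any chat-style LLM, e.g. an OpenAI-compatible
# endpoint or transformers' apply_chat_template + generate.
```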
You can then use the following code snippet with the `math_verify` library to compare mathematical expressions:

```python
from math_verify import LatexExtractionConfig, parse, verify
from latex2sympy2_extended import NormalizationConfig


def accuracy_reward(completion: str, solution: str) -> float:
    """Reward function that checks if the completion matches the ground truth."""
    # parse the gold solution (assumed to always succeed)
    gold_parsed = parse(solution, extraction_mode="first_match")

    # parse the model's completion, extracting the boxed LaTeX answer
    answer_parsed = parse(
        completion,
        extraction_config=[
            LatexExtractionConfig(
                normalization_config=NormalizationConfig(
                    nits=False,
                    malformed_operators=False,
                    basic_latex=True,
                    equations=True,
                    boxed="all",
                    units=True,
                )
            )
        ],
        extraction_mode="first_match",
    )

    # verify and return a binary reward; on error, print and return 0.0
    try:
        return float(verify(gold_parsed, answer_parsed))
    except Exception as e:
        print(f"verify failed: {e}, answer: {answer_parsed}, gold: {gold_parsed}")
        return 0.0
```
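A quick sanity check (the completion and gold answers below are made-up examples):

```python
completion = "Таким образом, окончательный ответ: $\\boxed{42}$."
print(accuracy_reward(completion, "$42$"))  # expected: 1.0
print(accuracy_reward(completion, "$41$"))  # expected: 0.0
```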