---
license: apache-2.0
task_categories:
- question-answering
language:
- ru
pretty_name: T-math
size_categories:
- n<1K
dataset_info:
  features:
  - name: question
    dtype: string
  - name: verifiable_answer
    dtype: string
  - name: year
    dtype: string
  - name: grade
    dtype: string
  - name: full_answer
    dtype: string
  - name: solutions
    list: string
  - name: task_complexity
    dtype: string
  - name: olympiad
    dtype: string
  splits:
  - name: train
    num_bytes: 510955
    num_examples: 331
  download_size: 228445
  dataset_size: 510955
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# 🧮 T-Math
**T-Math** is a dataset of Russian math olympiad problems created to assess the reasoning capabilities of large language models (LLMs) in mathematics.
It includes 331 problems from the [All-Russian School Olympiad](https://vos.olimpiada.ru/) and the [Moscow Olympiad](https://mos.olimpiada.ru) for high school students, covering the period from 1998 to 2025.
The tasks and their ground-truth answers were extracted automatically and subsequently verified by human assessors.
Key features:
- Challenging problems that require multi-step reasoning (median completion length for Qwen3-32B is 16K tokens), sourced from top-tier Russian olympiads
- Easily verifiable: answers are numeric-only and checked using the `math_verify` library to compare mathematical expressions
- Not yet saturated, even by frontier reasoning models such as Gemini 2.5 Pro and DeepSeek R1
- Contains 331 samples, making it the largest Russian olympiad-level math benchmark and more statistically robust than smaller datasets such as the 30-problem AIME benchmark
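
To take a quick look at the fields listed in the metadata above, the snippet below is a minimal loading sketch using the Hugging Face `datasets` library; the repository id is a placeholder, not this dataset's confirmed path on the Hub:
```python
from datasets import load_dataset

# Placeholder repo id: replace with the dataset's actual path on the Hub.
ds = load_dataset("your-org/T-math", split="train")

print(ds.num_rows)  # 331 problems
example = ds[0]
print(example["question"])           # problem statement in Russian
print(example["verifiable_answer"])  # numeric ground-truth answer
print(example["olympiad"], example["year"], example["grade"], example["task_complexity"])
```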
## 📊 Evaluation Results
|Model|pass@1|
|--|--|
|o4-mini-high|**0.73**|
|DeepSeek-R1-0528|<ins>0.71</ins>|
|Gemini-2.5-Pro|0.70|
|Claude Sonnet 4|0.56|
|T-pro-it-2.0|0.54|
|Qwen3-32B|0.53|
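
The card does not spell out how pass@1 was estimated; a common approach, shown here only as a sketch, is to sample several completions per problem, verify each one, and average the per-completion correctness:
```python
from statistics import mean

def pass_at_1(per_problem_scores: list[list[float]]) -> float:
    """Estimate pass@1: per_problem_scores[i][j] is 1.0 if the j-th sampled
    completion for problem i was verified as correct, else 0.0."""
    return mean(mean(scores) for scores in per_problem_scores)

# Example: 3 problems with 4 sampled completions each.
print(pass_at_1([[1.0, 1.0, 0.0, 1.0], [0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0]]))  # ~0.583
```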
## 🗂️ Filtering procedure
The text was extracted from PDFs using [Qwen/Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct). Tasks, along with their ground-truth and verifiable (numeric) answers, were extracted via LLM calls.
We filtered out invalid questions using an LLM based on the following criteria:
- Tasks requiring multiple answers
- Tasks without a single correct answer
- Theorem-like tasks where the main goal is proving a statement, making automatic verification non-trivial
- Tasks with non-numeric answers, to simplify answer comparison
- Tasks that cannot be solved without access to an accompanying image
Next, we removed tasks on which Qwen3-8B achieved a 100% pass@16 rate, as such tasks are too easy to offer much value for benchmarking reasoning.
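
A minimal sketch of this saturation filter, assuming correctness flags for 16 Qwen3-8B attempts per task have already been collected (the `results` structure and function names are hypothetical):
```python
def is_saturated(flags: list[bool], k: int = 16) -> bool:
    """True if the model solved the task on all k attempts, i.e. 100% pass@k."""
    return len(flags) == k and all(flags)

def keep_unsaturated(results: dict[str, list[bool]]) -> list[str]:
    """Keep only task ids that Qwen3-8B did not solve on every attempt."""
    return [task_id for task_id, flags in results.items() if not is_saturated(flags)]
```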
Finally, both the questions and the verifiable answers were manually reviewed by assessors to ensure consistency with the original sources.
## 🛠️ How to use
Add the following system prompt to guide the model to return its final answer inside a `\boxed{}` tag, making it easier to parse:
```
Решите следующую математическую задачу эффективно и ясно. Последняя строка вашего ответа должна иметь следующий формат:
'Таким образом, окончательный ответ: $\boxed{ОТВЕТ}$.' (без кавычек), где ОТВЕТ - это просто окончательное число или выражение, решающее задачу.
Думайте шаг за шагом перед ответом.
```
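
One way to apply this prompt, sketched below in the OpenAI-style chat message format (the helper name and how you send the messages to a model are assumptions, not part of this card):
```python
SYSTEM_PROMPT = (
    "Решите следующую математическую задачу эффективно и ясно. "
    "Последняя строка вашего ответа должна иметь следующий формат: "
    "'Таким образом, окончательный ответ: $\\boxed{ОТВЕТ}$.' (без кавычек), "
    "где ОТВЕТ - это просто окончательное число или выражение, решающее задачу. "
    "Думайте шаг за шагом перед ответом."
)

def build_messages(question: str) -> list[dict[str, str]]:
    """Build a chat-format request for a single T-Math problem."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]
```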
You can then use the following code snippet with the `math_verify` library to compare mathematical expressions:
```python
from math_verify import LatexExtractionConfig, parse, verify
from latex2sympy2_extended import NormalizationConfig


def accuracy_reward(completion: str, solution: str) -> float:
    """Reward function that checks if the completion matches the ground truth."""
    # Parse the gold solution (assumed to always succeed).
    gold_parsed = parse(solution, extraction_mode="first_match")
    # Parse the model's completion with the same LaTeX extraction settings.
    answer_parsed = parse(
        completion,
        extraction_config=[
            LatexExtractionConfig(
                normalization_config=NormalizationConfig(
                    nits=False,
                    malformed_operators=False,
                    basic_latex=True,
                    equations=True,
                    boxed="all",
                    units=True,
                )
            )
        ],
        extraction_mode="first_match",
    )
    # Verify and return a binary reward; on error, log and return 0.0.
    try:
        return float(verify(gold_parsed, answer_parsed))
    except Exception as e:
        print(f"verify failed: {e}, answer: {answer_parsed}, gold: {gold_parsed}")
        return 0.0
```
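
To illustrate the expected behaviour, here is a small self-contained usage example with a made-up completion and gold answer (the value 42 is purely illustrative):
```python
completion = (
    "Рассуждение по шагам...\n"
    "Таким образом, окончательный ответ: $\\boxed{42}$."
)
print(accuracy_reward(completion, "42"))  # 1.0, boxed answer matches the gold answer
print(accuracy_reward(completion, "43"))  # 0.0, boxed answer differs
```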