---
license: apache-2.0
task_categories:
- question-answering
language:
- ru
pretty_name: T-math
size_categories:
- n<1K
dataset_info:
  features:
  - name: question
    dtype: string
  - name: verifiable_answer
    dtype: string
  - name: year
    dtype: string
  - name: grade
    dtype: string
  - name: full_answer
    dtype: string
  - name: solutions
    list: string
  - name: task_complexity
    dtype: string
  - name: olympiad
    dtype: string
  splits:
  - name: train
    num_bytes: 510955
    num_examples: 331
  download_size: 228445
  dataset_size: 510955
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# 🧮 T-Math

**T-Math** is a dataset of Russian math olympiad problems created to assess the reasoning capabilities of large language models (LLMs) in mathematics.
It includes 331 problems from the [All-Russian School Olympiad](https://vos.olimpiada.ru/) and the [Moscow Olympiad](https://mos.olimpiada.ru) for high school students, covering the period from 1998 to 2025.
The tasks and their ground-truth answers were extracted automatically and subsequently verified by human assessors.

Key features:
- Challenging problems that require multi-step reasoning (the median completion length for Qwen3-32B is 16K tokens), sourced from top-tier Russian olympiads
- Easily verifiable: answers are numeric-only and checked with the `math_verify` library, which compares mathematical expressions
- Not yet saturated, even by frontier reasoning models such as Gemini 2.5 Pro and DeepSeek R1
- Contains 331 samples, making it the largest Russian math olympiad-level benchmark and more statistically robust than smaller datasets such as the 30-sample AIME benchmark
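
To take a quick look at the data, the sketch below loads the train split with the 🤗 `datasets` library and prints the fields of one sample. The repository id is a placeholder; replace it with this dataset's actual path on the Hub.
```python
from datasets import load_dataset

# NOTE: placeholder repo id; replace with this dataset's actual path on the Hub
ds = load_dataset("<org>/T-math", split="train")

print(ds)  # 331 rows with the features listed in the card header
sample = ds[0]
print(sample["question"])            # problem statement (in Russian)
print(sample["verifiable_answer"])   # numeric answer used for automatic checking
print(sample["olympiad"], sample["year"], sample["grade"], sample["task_complexity"])
```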

## 📊 Evaluation Results

|Model|pass@1|
|--|--|
|o4-mini-high|**0.73**|
|DeepSeek-R1-0528|<ins>0.71</ins>|
|Gemini-2.5-Pro|0.70|
|Claude Sonnet 4|0.56|
|T-pro-it-2.0|0.54|
|Qwen3-32B|0.53|
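
Here, pass@1 is the fraction of problems whose final answer is verified as correct; when several completions per problem are sampled, per-problem accuracies are typically averaged first. The sketch below shows that aggregation on top of scores produced by the `accuracy_reward` function from the How to use section; it illustrates the metric rather than reproducing the exact evaluation harness behind this table.
```python
from statistics import mean

def pass_at_1(per_problem_scores: list[list[float]]) -> float:
    """per_problem_scores[i] holds 0/1 accuracy scores for the completions
    sampled for problem i (one or more completions per problem)."""
    return mean(mean(scores) for scores in per_problem_scores)

# e.g. two problems, two sampled completions each
print(pass_at_1([[1.0, 0.0], [1.0, 1.0]]))  # 0.75
```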

## 🗂️ Filtering procedure

The text was extracted from PDFs using [Qwen/Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct). Tasks, along with their ground-truth and verifiable (numeric) answers, were extracted via LLM calls.
We then filtered out invalid questions using an LLM, based on the following criteria:
- Tasks requiring multiple answers
- Tasks without a single correct answer
- Theorem-like tasks whose main goal is proving a statement, which makes automatic verification non-trivial
- Tasks with non-numeric answers, to simplify answer comparison
- Tasks that cannot be solved without access to an accompanying image

Next, we removed tasks of moderate difficulty (those on which Qwen3-8B achieved a 100% pass@16 rate), as they offer limited value for benchmarking reasoning; see the sketch below.
Finally, both the questions and the verifiable answers were manually reviewed by assessors to ensure consistency with the original sources.
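
A minimal sketch of that difficulty filter, assuming each task already has 16 sampled Qwen3-8B completions scored as 0/1 by a checker such as the `accuracy_reward` function shown in the next section (the function and data layout here are illustrative, not the authors' actual pipeline):
```python
def keep_task(qwen3_8b_scores: list[float]) -> bool:
    """Keep a task only if Qwen3-8B fails at least once out of 16 attempts,
    i.e. drop tasks with a 100% pass@16 rate."""
    assert len(qwen3_8b_scores) == 16
    return sum(qwen3_8b_scores) < 16

# a task solved in every attempt is dropped; one with a single failure is kept
print(keep_task([1.0] * 16))          # False
print(keep_task([1.0] * 15 + [0.0]))  # True
```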

## 🛠️ How to use

Add the following system prompt to guide the model to return the final answer in a `\boxed{}` tag, which makes it easier to parse:
```
Решите следующую математическую задачу эффективно и ясно. Последняя строка вашего ответа должна иметь следующий формат:
'Таким образом, окончательный ответ: $\boxed{ОТВЕТ}$.' (без кавычек), где ОТВЕТ - это просто окончательное число или выражение, решающее задачу.
Думайте шаг за шагом перед ответом.
```
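
For example, a single request against an OpenAI-compatible endpoint could be assembled as follows; the base URL and model name are placeholders, and any chat client that accepts a system message works the same way:
```python
from openai import OpenAI

SYSTEM_PROMPT = (
    "Решите следующую математическую задачу эффективно и ясно. Последняя строка вашего ответа должна иметь следующий формат:\n"
    "'Таким образом, окончательный ответ: $\\boxed{ОТВЕТ}$.' (без кавычек), где ОТВЕТ - это просто окончательное число или выражение, решающее задачу.\n"
    "Думайте шаг за шагом перед ответом."
)

# placeholder endpoint and model name; point these at your own deployment
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def solve(question: str) -> str:
    """Send one T-Math question with the recommended system prompt and return the completion."""
    response = client.chat.completions.create(
        model="Qwen/Qwen3-32B",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```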

You can then use the following code snippet with the `math_verify` library to compare mathematical expressions:
```python
from math_verify import LatexExtractionConfig, parse, verify
from latex2sympy2_extended import NormalizationConfig


def accuracy_reward(completion: str, solution: str) -> float:
    """Reward function that checks if the completion matches the ground truth."""
    # parse the gold solution (assumed to always succeed)
    gold_parsed = parse(solution, extraction_mode="first_match")

    # parse the model's completion with the same LaTeX extraction settings
    answer_parsed = parse(
        completion,
        extraction_config=[
            LatexExtractionConfig(
                normalization_config=NormalizationConfig(
                    nits=False,
                    malformed_operators=False,
                    basic_latex=True,
                    equations=True,
                    boxed="all",
                    units=True,
                )
            )
        ],
        extraction_mode="first_match",
    )

    # verify and return a binary reward; on error, print the details and return 0.0
    try:
        return float(verify(gold_parsed, answer_parsed))
    except Exception as e:
        print(f"verify failed: {e}, answer: {answer_parsed}, gold: {gold_parsed}")
        return 0.0
```
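
A quick sanity check of `accuracy_reward` with a toy completion in the format requested by the system prompt; in practice, pass the dataset's `verifiable_answer` field as the gold solution:
```python
completion = (
    "Сумма углов треугольника равна 180 градусам.\n"
    "Таким образом, окончательный ответ: $\\boxed{180}$."
)
print(accuracy_reward(completion, "180"))  # should print 1.0
```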