# AI-MO Olympiad Reference Dataset
This dataset contains a structured collection of Olympiad problems and their solutions, organized by competition. It prioritizes high-quality data, favoring "official" solutions to problems where available.
## Structure

```
<competition name>/      # Problems and solutions from one competition (e.g. IMO)
├── raw/                 # Raw problem/solution statements (.pdf)
│   ├── file1.pdf
│   ├── file2.pdf
├── download_script/     # The scripts used to download the raw data
│   ├── download.py
├── md/                  # .md files generated from the raw/ files
│   ├── file1.md
│   ├── file2.md
├── segment_script/      # The scripts used to segment the data
│   ├── segment.py
└── segmented/           # Segmented .jsonl data for easier processing
    ├── file1.jsonl
    ├── file2.jsonl
    └── file3.jsonl
```
Each JSON object in a .jsonl file follows this structure:
```jsonc
{
  "problem": "string",        // Mandatory: the problem statement in LaTeX or Markdown
  "solution": "string",       // Mandatory: the solution to the problem
  "year": "int",              // Optional: year when the problem was presented
  "problem_type": "string",   // Optional: the mathematical domain of the problem. Supported types:
                              // ['Algebra', 'Geometry', 'Number Theory', 'Combinatorics', 'Calculus',
                              //  'Inequalities', 'Logic and Puzzles', 'Other']
  "question_type": "string",  // Optional: the form or style of the problem. Supported classes:
                              // ['MCQ', 'proof', 'math-word-problem'].
                              // 'math-word-problem' is a problem with a final answer.
  "answer": "string",         // Optional: the final answer if question_type is "math-word-problem"
  "source": "string",         // Optional: TODO: describe
  "exam": "string",           // Optional: TODO: describe
  "difficulty": "int",        // Optional: TODO: describe
  "other": "...",             // Optional: you can add other fields with metadata
}
```
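For illustration, a single line of a segmented .jsonl file might look like this (all values here are invented for the example):

```json
{"problem": "Prove that $n^2 + n + 1$ is odd for every integer $n$.", "solution": "Since $n^2 + n = n(n+1)$ is a product of two consecutive integers, it is even; hence $n^2 + n + 1$ is odd.", "year": 2006, "problem_type": "Number Theory", "question_type": "proof"}
```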
## Steps to collect data for formalization
### 1. Assign yourself a task

Check the tracker and assign yourself one line by updating the columns:

- status: IN PROGRESS
- assignee: your name
### 2. Setup

Download the data locally:

```bash
git lfs install
git clone git@hf.co:datasets/AI-MO/olympiads-ref
```
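If you prefer not to use git for the download, the huggingface_hub CLI should work as well (a sketch; assumes huggingface_hub is installed):

```bash
huggingface-cli download AI-MO/olympiads-ref --repo-type dataset --local-dir olympiads-ref
```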
### 3. Find .pdf resources

First check whether .pdf files are already available in https://huggingface.co/AI-MO/olympiads-0.1.

- If yes, upload them to `AI-MO/olympiads-ref/<competition>/raw/` and continue to step 4 (a git upload sketch is shown after this list).
- If no, find sources on the internet (preferably with official solutions), download them, and upload them to `AI-MO/olympiads-ref/<competition>/raw/`.
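For the upload, the git checkout from step 2 can be used directly. A minimal sketch (the file name and competition are hypothetical):

```bash
cp ~/Downloads/imo-2006-solutions.pdf olympiads-ref/IMO/raw/   # hypothetical file
cd olympiads-ref
git add IMO/raw/ && git commit -m "Add IMO raw PDFs" && git push
```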
### 4. Find .md resources

First check whether .md files are already available in https://huggingface.co/AI-MO/olympiads-0.1.

- If yes, upload them to `AI-MO/olympiads-ref/<competition>/md/` and continue to step 6.
- If no, find sources on the internet (preferably with official solutions), download them, and upload them to `AI-MO/olympiads-ref/<competition>/md/`.
### 5. Convert .pdf to .md using Mathpix

Use data_pipeline. Example:

```bash
python -m data_pipeline convert_to_md --method=pdf_to_md --input_dir="/home/marvin/workspace/olympiads-ref/IMO/raw" --output_dir="/home/marvin/workspace/olympiads-ref/IMO/md"
```
### 6. Find .jsonl resources

First check whether segmented .jsonl files are already available in https://huggingface.co/datasets/AI-MO/olympiads-0.3. You can check whether the segmentation has been done in this old tracker.

- If yes, check the quality and upload the files to `AI-MO/olympiads-ref/<competition>/segmented/`, then continue to step 8.
- If no, continue to step 7.
### 7. Segment the .md files into .jsonl

Write a segment.py that can be applied to your data (please do sanity checks!). Examples are this or that. Once you are happy with your segmentation, upload the .jsonl files to `AI-MO/olympiads-ref/<competition>/segmented/` and the segment.py to `AI-MO/olympiads-ref/<competition>/segment_script/`. A minimal sketch is shown below.
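A minimal sketch of what such a script might look like. The heading patterns, directory names, and the `question_type` default are assumptions; adapt them to the actual layout of your competition's .md files:

```python
import json
import re
from pathlib import Path

# Hypothetical layout: each problem starts with a "## Problem N" heading and
# its solution follows a "## Solution" heading. Adapt both patterns to the
# actual structure of your competition's .md files.
PROBLEM_RE = re.compile(r"^## Problem \d+\s*$", re.MULTILINE)

def segment_file(md_path: Path, out_path: Path) -> None:
    text = md_path.read_text(encoding="utf-8")
    chunks = PROBLEM_RE.split(text)[1:]  # drop everything before the first problem
    with out_path.open("w", encoding="utf-8") as f:
        for chunk in chunks:
            parts = chunk.split("## Solution", maxsplit=1)
            if len(parts) != 2:  # sanity check: skip problems without a solution
                continue
            record = {
                "problem": parts[0].strip(),
                "solution": parts[1].strip(),
                "question_type": "proof",  # assumption made for this sketch
            }
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    out_dir = Path("IMO/segmented")
    out_dir.mkdir(parents=True, exist_ok=True)
    for md_file in sorted(Path("IMO/md").glob("*.md")):
        segment_file(md_file, out_dir / f"{md_file.stem}.jsonl")
```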
Ask for a review.
### 8. Update the status in the trackers

Update the tracker with the following columns (the snippet after this list can help compute the counts):

- status: DONE + a link to your generated data on HF
- problem_count: count of problems in the data
- solution_count: count of solutions in the data (can differ from problem_count, since a problem can have several solutions)
- years: range of competition years covered by your data (so we can easily track whether many years are missing)
- assignee: your name

Update the old tracker with this column:

- ref: color the competition you segmented in green
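A quick way to compute these counts (a sketch; it assumes each .jsonl line holds one problem/solution pair, so a problem with several solutions appears on several lines):

```python
import json
from pathlib import Path

problems, solution_count, years = set(), 0, set()
for path in Path("IMO/segmented").glob("*.jsonl"):  # adjust the competition dir
    for line in path.open(encoding="utf-8"):
        record = json.loads(line)
        problems.add(record["problem"])
        solution_count += 1  # one (problem, solution) pair per line is assumed
        if "year" in record:
            years.add(record["year"])

print("problem_count:", len(problems))
print("solution_count:", solution_count)
if years:
    print(f"years: {min(years)}-{max(years)}")
```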
### 9. Integrate the data into a base dataset

Create a ticket in git.
## Notes

- Image placeholders in the dataset correspond to actual images stored in the `images.parquet` file.
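To inspect those images, something like the following should work (a sketch; the exact column layout of `images.parquet` is an assumption, so check it first):

```python
import pandas as pd

df = pd.read_parquet("images.parquet")
print(df.columns)        # inspect the actual column names before going further
print(len(df), "images")
```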