---
language:
  - fr
  - es
  - zh
size_categories:
  - 1K<n<10K
configs:
  - config_name: default
    data_files:
      - split: test
        path: test/data-00000-of-00001.arrow
---

# MultiNRC: Multilingual Native Reasoning Challenge

MultiNRC is a challenging evaluation benchmark for large language models, designed to assess multilingual reasoning ability in French, Spanish, and Chinese. Unlike existing benchmarks that translate English-centric content into other languages, MultiNRC consists of over 1,000 reasoning questions written from scratch by native speakers to capture the linguistic and cultural nuances of each language.

## Features

- **Languages:** French, Spanish, Chinese
- **Categories:**
  - Language-specific Linguistic Reasoning
  - Wordplay & Riddles
  - Cultural Reasoning & Traditions
  - Math Reasoning with Cultural Relevance
- **English Equivalents:** For the Cultural Reasoning & Traditions and Math Reasoning categories, human-translated English versions are provided for direct comparison.
- **Ground Truth Final Answers:** Short, objective answers accompany each prompt for automatic evaluation (see the scoring sketch below).
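Because every prompt carries a short, objective final answer, scoring can be reduced to string comparison. The sketch below is an illustration of this setup, not the benchmark's official grader; in practice, per-language normalization (accents, numerals, punctuation) would be needed before comparing.

```python
def exact_match(prediction: str, ground_truth: str) -> bool:
    """Naive exact-match check of a model answer against the ground-truth final answer (gtfa).

    Illustration only: a real grader would normalize accents, numerals, and
    punctuation for the target language before comparing.
    """
    return prediction.strip().casefold() == ground_truth.strip().casefold()
```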

## Dataset Structure

Each entry includes (see the loading sketch below):

- A native-language prompt and ground-truth final answer (`i18n_prompt`, `i18n_gtfa`)
- An English-equivalent prompt and answer (`english_prompt`, `english_gtfa`), provided only for the Math Reasoning and Cultural Reasoning & Traditions categories
- Metadata fields: `task_id`, `language`, `category`
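A minimal loading sketch with the 🤗 `datasets` library is shown below. The repository id is a placeholder assumption; the field names follow the schema listed above.

```python
from datasets import load_dataset

# NOTE: the dataset path below is an assumption; substitute the actual Hub repo id.
ds = load_dataset("ScaleAI/MultiNRC", split="test")

example = ds[0]
print(example["task_id"], example["language"], example["category"])
print(example["i18n_prompt"])  # native-language prompt
print(example["i18n_gtfa"])    # native-language ground-truth final answer

# English equivalents are populated only for the Cultural Reasoning and
# Math Reasoning categories.
if example.get("english_prompt"):
    print(example["english_prompt"], example["english_gtfa"])
```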

## Citation

If you use MultiNRC in your research, please cite:

```bibtex
@article{fabbri2025multinrc,
  title  = {MultiNRC: A Challenging Native Multilingual Reasoning Evaluation Benchmark for LLMs},
  author = {Fabbri, Alexander R. and Mares, Diego and Flores, Jorge and Mankikar, Meher and Hernandez, Ernesto and Lee, Dean and Liu, Bing and Xing, Chen},
  year   = {2025},
  note   = {arXiv preprint, arXiv:XXXX.XXXXX}
}
```