---
language:
- ko
- en
license: apache-2.0
task_categories:
- automatic-speech-recognition
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: text
    dtype: string
  - name: text_normalized
    dtype: string
  - name: text_pier_labeled
    dtype: string
  - name: cs_level
    dtype: string
  - name: cs_levels_all
    dtype: string
  - name: category
    dtype: string
  - name: loanwords
    dtype: string
  - name: sample_id
    dtype: string
  splits:
  - name: test
    num_bytes: 256512910
    num_examples: 1121
  download_size: 235090892
  dataset_size: 256512910
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
tags:
- speech
- recognition
- code-switching
---
# HiKE: Hierarchical Evaluation Framework for Korean-English Code-Switching Speech Recognition
> [Gio Paik](https://sites.google.com/view/giopaik)\*, [Yongbeom Kim](https://bayle0627.github.io/), [Soungmin Lee](https://minovermax.github.io/), [Sangmin Ahn](https://www.linkedin.com/in/sangmin-ahn-0656ab1b1/)†, and [Chanwoo Kim](https://www.linkedin.com/in/chanwkim)†, *Under Review*
> \* Corresponding Author, † Equal Contribution
[**✨ Code**](https://github.com/ThetaOne-AI/HiKE) | [**🤗 Dataset**](https://huggingface.co/datasets/thetaone-ai/HiKE) | [**📖 Paper**](https://arxiv.org/abs/2509.24613)
## Introduction
HiKE is the first Korean-English Code-Switching (CS) Automatic Speech Recognition (ASR) benchmark composed of high-quality, natural CS data across various topics. We use **Mixed Error Rate (MER)** and **Point of Interest Error Rate (PIER)** [1] to precisely evaluate models' CS-ASR capability.
Experimental results show that all multilingual ASR models exhibit significantly higher error rates on code-switching data, and that their CS-ASR capabilities can be improved through fine-tuning.
For further details, please refer to [our paper](https://arxiv.org/abs/2509.24613).
[1] Ugan et al., [“PIER: A Novel Metric for Evaluating What Matters in Code-Switching”](https://arxiv.org/abs/2501.09512), ICASSP 2025
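MER extends word error rate to mixed-language text. As a rough illustration (not the paper's exact definition), a common convention is to count each Korean Hangul syllable as one token and each English word as one token before computing edit distance; the sketch below assumes that convention, and `tokenize_mixed`, `edit_distance`, and `mer` are hypothetical helper names:

```python
import re

def tokenize_mixed(text):
    # One token per Hangul syllable, one per English word.
    # (A common MER convention; the paper's exact tokenization may differ.)
    return re.findall(r"[가-힣]|[A-Za-z0-9']+", text)

def edit_distance(ref, hyp):
    # Standard Levenshtein distance over token lists (single-row DP).
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def mer(ref, hyp):
    ref_t, hyp_t = tokenize_mixed(ref), tokenize_mixed(hyp)
    return edit_distance(ref_t, hyp_t) / max(len(ref_t), 1)
```

For example, `mer("오늘 meeting 있어요", "오늘 미팅 있어요")` penalizes the hypothesis for transcribing the English word as its Korean rendering, which a purely word-level WER on Korean text would score less consistently.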
### Hierarchical CS-Level Labels
To provide a more fine-grained comparison of model performance across different forms of code-switching, we labeled each utterance with one of the following levels:
- **Word-level CS**: Code-switching that occurs at the word level, typically as the substitution of a single noun or adjective.
- **Phrase-level CS**: Code-switching in which a multi-word phrase within a sentence appears in the other language.
- **Sentence-level CS**: Alternation between languages on a sentence-by-sentence basis.
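The distinction can be illustrated with a toy heuristic that measures the longest contiguous run of English words (this is only an illustration with made-up example sentences, not the authors' labeling procedure, which was done manually):

```python
import re

def cs_level(utterance):
    # Toy heuristic, for illustration only.
    sentences = [s for s in re.split(r"[.?!]\s*", utterance.strip()) if s]
    korean = [bool(re.search(r"[가-힣]", s)) for s in sentences]
    english = [bool(re.search(r"[A-Za-z]", s)) for s in sentences]
    # Whole sentences alternating between languages -> sentence-level.
    if any(k and not e for k, e in zip(korean, english)) and \
       any(e and not k for k, e in zip(korean, english)):
        return "sentence"
    # Otherwise, look at the longest run of contiguous English words.
    longest = max((len(run.split()) for run in
                   re.findall(r"(?:[A-Za-z']+\s*)+", utterance)), default=0)
    return "phrase" if longest > 1 else "word"
```

Under this heuristic, "오늘 meeting 있어요" is word-level, "그 일은 end of the day까지 끝내야 해요" is phrase-level, and "어제 영화 봤어. It was really good." is sentence-level.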
### Loanword Labels
Loanwords are words adopted from a foreign language and adapted to the phonology and orthography of the new language. For example, the Korean loanword **'버스' [bəs]** and the English word **'bus' [bʌs]** are pronounced almost identically and can be used interchangeably in a CS context, which makes it ambiguous whether such an utterance truly code-switches. To avoid this ambiguity, we meticulously labeled all loanwords contained in our dataset.
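These labels make it easy to select only genuine code-switching utterances. A toy sketch using the card's `text`, `cs_level`, and `loanwords` fields (the example texts and label values such as `"word"` and `"none"` are illustrative, not taken from the dataset):

```python
# Toy records mirroring the card's features; contents are made up.
records = [
    {"text": "오늘 meeting 있어요", "cs_level": "word", "loanwords": ""},
    {"text": "버스 타고 갈게", "cs_level": "none", "loanwords": "버스"},
    {"text": "어제 영화 봤어. It was great.", "cs_level": "sentence", "loanwords": ""},
]

# Keep code-switching samples that do not hinge on loanwords.
cs_only = [r for r in records
           if r["cs_level"] != "none" and not r["loanwords"]]
```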
## Sample Usage
### Install Dependencies
```sh
git clone --recurse-submodules https://github.com/ThetaOne-AI/HiKE
cd HiKE
pip install -r requirements.txt
apt-get update && apt-get install -y ffmpeg # install ffmpeg if needed
```
### Run Evaluation
```sh
bash scripts/evaluate_whisper.sh
# or
python src/main.py --model whisper --model_name openai/whisper-large --batch_size 8
```
The results will be saved in `./outputs`.
### Evaluate Your Model
Implement a class that follows the `BaseASR` interface and register it in `src/main.py`.
Create `src/models/your_model.py`:
```python
from typing import Any, Dict, List

from src.models import BaseASR


class YourModel(BaseASR):
    def __init__(self, model_name: str = "your/model-or-config"):
        self.model_name = model_name
        # TODO: load your model or client here

    def generate(self, input, batch_size: int | None = None, **kwargs) -> List[Dict[str, Any]]:
        if not isinstance(input, list):
            input = [input]
        # Replace `your_transcribe_fn` with your model's transcription call.
        return [{"text": your_transcribe_fn(x)} for x in input]
```
Register in `src/main.py`:
```python
elif model == "your_model":
    from models.your_model import YourModel
    asr = YourModel(model_name)
```
Run:
```sh
python src/main.py --model your_model --model_name your/model-or-name
```
## Citation
```bibtex
@misc{paik2025hike,
  title={{HiKE}: Hierarchical Evaluation Framework for Korean-English Code-Switching Speech Recognition},
  author={Gio Paik and Yongbeom Kim and Soungmin Lee and Sangmin Ahn and Chanwoo Kim},
  year={2025},
  eprint={2509.24613},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2509.24613},
}
```