Yougen/wuw_dataset
A Wake-Up-Word (WUW) speech dataset, packed as WebDataset tar shards.

The source data is a Kaldi-style data directory
(wav.scp, text, utt2spk, utt2dur, segments), in which multiple
utterances share one long recording via the segments file.
To avoid duplicating audio, each tar sample corresponds to one full
recording, and the utterance-level metadata (id / start / end / text / spk / duration)
is stored as a JSON list inside that sample. Downstream consumers slice the
decoded waveform by [start*sr : end*sr] themselves.
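As a minimal sketch of that slicing convention (the waveform, sample rate, and segment times below are synthetic placeholders, not taken from the dataset):

```python
import numpy as np

# Synthetic 10-second recording at 16 kHz (placeholder values).
sr = 16000
wav = np.zeros(10 * sr, dtype=np.float32)

# One segment entry, shaped like the per-recording JSON (made-up times).
seg = {"id": "utt1", "start": 1.25, "end": 2.75}

# Slice the decoded waveform by [start*sr : end*sr].
s = int(seg["start"] * sr)
e = int(seg["end"] * sr)
utt = wav[s:e]
print(utt.shape)  # (24000,) -> 1.5 s of audio
```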
Layout

data/
  <split>/
    metadata.csv          # flattened utterance-level table
    audio/
      <split>-000.tar
      <split>-001.tar
      ...
Shard counts:
train: 1423 tar shards
Inside each tar, every sample is a pair of files sharing a unique key:

<key>.wav     # raw recording bytes (original format preserved)
<key>.json    # {"rec_id": ..., "rel_path": ..., "wav_format": "wav",
              #  "segments": [
              #    {"id": ..., "start": ..., "end": ...,
              #     "text": ..., "spk": ..., "duration": ...},
              #    ...
              #  ]}
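The pairing can be illustrated with the standard library's tarfile module. The shard below is built in memory with made-up contents, purely to show the <key>.wav / <key>.json convention; real shards live under data/<split>/audio/:

```python
import io
import json
import tarfile

# Build a tiny in-memory shard with one sample (synthetic data).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    key = "rec_000001"
    wav_bytes = b"RIFF....WAVE"  # placeholder for raw recording bytes
    meta = {
        "rec_id": key,
        "rel_path": "audio/a.wav",
        "wav_format": "wav",
        "segments": [{"id": "utt1", "start": 0.0, "end": 1.0,
                      "text": "hey wake", "spk": "spk1", "duration": 1.0}],
    }
    for name, data in [(key + ".wav", wav_bytes),
                       (key + ".json", json.dumps(meta).encode())]:
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Read the shard back; members with the same key form one sample.
buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    for m in tar.getmembers():
        if m.name.endswith(".json"):
            meta = json.loads(tar.extractfile(m).read())
            print(m.name, len(meta["segments"]))
```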
metadata.csv columns (one row per utterance):
key, shard, rec_id, rel_path, wav_format, id, start, end, duration, text, spk
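For example, the table can be read with the standard csv module. The two rows below are invented, but illustrate that utterances from the same recording repeat the key / rec_id columns:

```python
import csv
import io

# Hypothetical excerpt of data/<split>/metadata.csv (values are made up).
text = """key,shard,rec_id,rel_path,wav_format,id,start,end,duration,text,spk
rec_000001,train-000.tar,rec_000001,audio/a.wav,wav,utt1,0.00,1.20,1.20,hey wake,spk1
rec_000001,train-000.tar,rec_000001,audio/a.wav,wav,utt2,1.50,2.40,0.90,hello,spk1
"""
rows = list(csv.DictReader(io.StringIO(text)))

# Two utterance rows, one shared recording.
print(len(rows), rows[0]["key"] == rows[1]["key"])  # 2 True
```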
Loading

from datasets import load_dataset

ds = load_dataset("Yougen/wuw_dataset")
ex = ds["train"][0]

# Decoded full recording
wav_array = ex["wav"]["array"]
sr = ex["wav"]["sampling_rate"]

# Slice out each utterance
for seg in ex["json"]["segments"]:
    s = int(seg["start"] * sr)
    e = int(seg["end"] * sr)
    print(seg["id"], seg["text"], wav_array[s:e].shape)
Streaming:

ds = load_dataset("Yougen/wuw_dataset", streaming=True)
for example in ds["train"]:
    print(example["__key__"], len(example["json"]["segments"]))
    break