UnicodeDecodeError when loading RedPajama-Data-1T-Sample dataset

#6
by yichuan-huang - opened

Hi,

I'm encountering an error when trying to load the RedPajama-Data-1T-Sample dataset using the datasets library. Here's the code I'm running:

from datasets import load_dataset
ds = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", trust_remote_code=True)

This results in the following traceback:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/workspace/huangyichuan/miniconda3/envs/entigraph/lib/python3.10/site-packages/datasets/load.py", line 2594, in load_dataset
    builder_instance = load_dataset_builder(
  File "/workspace/huangyichuan/miniconda3/envs/entigraph/lib/python3.10/site-packages/datasets/load.py", line 2266, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/workspace/huangyichuan/miniconda3/envs/entigraph/lib/python3.10/site-packages/datasets/load.py", line 1914, in dataset_module_factory
    raise e1 from None
  File "/workspace/huangyichuan/miniconda3/envs/entigraph/lib/python3.10/site-packages/datasets/load.py", line 1866, in dataset_module_factory
    can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
  File "/workspace/huangyichuan/miniconda3/envs/entigraph/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
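
For what it's worth, byte 0x8b at position 1 looks like the second byte of the gzip magic number (gzip streams begin with the bytes 0x1f 0x8b), so my guess is that load.py is reading a gzip-compressed file as if it were UTF-8 text. Here's a minimal sketch to test that theory on a suspect file; the path argument would be whichever cached file the traceback is actually reading, which I haven't pinned down yet:

import gzip

GZIP_MAGIC = b"\x1f\x8b"

def looks_gzipped(path):
    # gzip streams always start with the two magic bytes 0x1f 0x8b
    with open(path, "rb") as f:
        return f.read(2) == GZIP_MAGIC

def read_maybe_gzipped(path):
    # decode as UTF-8, transparently decompressing first if needed
    opener = gzip.open if looks_gzipped(path) else open
    with opener(path, "rt", encoding="utf-8") as f:
        return f.read()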

Environment details:

  • Python version: 3.10
  • datasets library version: 2.20.0
  • Operating system: Ubuntu 22.04.1
  • Environment: Miniconda virtual environment (entigraph)

What I've tried:

  • Ensured the datasets library is up-to-date.
  • Verified internet connectivity, as the dataset seems to be downloaded from the Hugging Face Hub.
  • Ran the code in a fresh Python session to avoid any state-related issues.

Questions:

  1. Has anyone encountered this UnicodeDecodeError when loading this dataset?
  2. Could this be related to the dataset's metadata or configuration files being encoded incorrectly?
  3. Are there any workarounds, such as specifying a different encoding or skipping problematic files? (One idea is sketched below.)
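
To make question 3 concrete, the workaround I have in mind is bypassing the loading script entirely: fetch the repository's data files and parse them with the generic JSON builder. This is only a sketch, under the assumption that the sample repo stores its data as *.jsonl files, which I haven't verified:

from pathlib import Path
from datasets import load_dataset
from huggingface_hub import snapshot_download

# Download only the data files, skipping the loading script.
# Assumption: the data ships as JSON Lines (*.jsonl).
local_dir = snapshot_download(
    repo_id="togethercomputer/RedPajama-Data-1T-Sample",
    repo_type="dataset",
    allow_patterns=["*.jsonl"],
)

# Parse with the built-in json builder; no remote code involved.
ds = load_dataset("json", data_files=str(Path(local_dir) / "*.jsonl"))

If the data is actually stored compressed (e.g., *.jsonl.gz), the pattern would need adjusting; as far as I know the json builder decompresses gzip files automatically.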

Any insights or suggestions would be greatly appreciated! Thanks in advance for your help.
