Dataset schema. Each record below lists its fields in this column order: id, number, title, body, state, html_url, created_at, updated_at, closed_at, user, labels, is_pull_request, comments.

| column | type | min | max | notes |
| --- | --- | --- | --- | --- |
| id | int64 | 599M | 3.26B | |
| number | int64 | 1 | 7.7k | |
| title | string | 1 | 290 | length in characters |
| body | string | 0 | 228k | length in characters; nullable |
| state | string | | | 2 values (open / closed) |
| html_url | string | 46 | 51 | length in characters |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 | |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 | |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 | nullable |
| user | dict | | | |
| labels | list | 0 | 4 | list length |
| is_pull_request | bool | | | 2 classes |
| comments | list | 0 | 0 | list length |
2,549,882,529
7,173
Release: 3.0.1
null
closed
https://github.com/huggingface/datasets/pull/7173
2024-09-26T08:25:54
2024-09-26T08:28:29
2024-09-26T08:26:03
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,549,781,691
7,172
Add torchdata as a regular test dependency
Add `torchdata` as a regular test dependency. Note that previously, `torchdata` was installed from their repo and current main branch (0.10.0.dev) requires Python>=3.9. Also note they made a recent release: 0.8.0 on Jul 31, 2024. Fix #7171.
closed
https://github.com/huggingface/datasets/pull/7172
2024-09-26T07:45:55
2024-09-26T08:12:12
2024-09-26T08:05:40
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,549,738,919
7,171
CI is broken: No solution found when resolving dependencies
See: https://github.com/huggingface/datasets/actions/runs/11046967444/job/30687294297 ``` Run uv pip install --system -r additional-tests-requirements.txt --no-deps Γ— No solution found when resolving dependencies: ╰─▢ Because the current Python version (3.8.18) does not satisfy Python>=3.9 and torchdata==0.10.0a0+1a98f21 depends on Python>=3.9, we can conclude that torchdata==0.10.0a0+1a98f21 cannot be used. And because only torchdata==0.10.0a0+1a98f21 is available and you require torchdata, we can conclude that your requirements are unsatisfiable. Error: Process completed with exit code 1. ```
closed
https://github.com/huggingface/datasets/issues/7171
2024-09-26T07:24:58
2024-09-26T08:05:41
2024-09-26T08:05:41
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
2,546,944,016
7,170
Support JSON lines with missing columns
Support JSON lines with missing columns. Fix #7169. The implemented test raised: ``` datasets.table.CastError: Couldn't cast age: int64 to {'age': Value(dtype='int32', id=None), 'name': Value(dtype='string', id=None)} because column names don't match ``` Related to: - #7160 - #7162
closed
https://github.com/huggingface/datasets/pull/7170
2024-09-25T05:08:15
2024-09-26T06:42:09
2024-09-26T06:42:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,546,894,076
7,169
JSON lines with missing columns raise CastError
JSON lines with missing columns raise CastError: > CastError: Couldn't cast ... to ... because column names don't match Related to: - #7159 - #7161
closed
https://github.com/huggingface/datasets/issues/7169
2024-09-25T04:43:28
2024-09-26T06:42:08
2024-09-26T06:42:08
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
2,546,710,631
7,168
sd1.5 diffusers controlnet training script gives new error
### Describe the bug This will randomly pop up during training now ``` Traceback (most recent call last): File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1192, in <module> main(args) File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1041, in main for step, batch in enumerate(train_dataloader): File "/usr/local/lib/python3.11/dist-packages/accelerate/data_loader.py", line 561, in __iter__ next_batch = next(dataloader_iter) ^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/dataloader.py", line 630, in __next__ data = self._next_data() ^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/dataloader.py", line 673, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/_utils/fetch.py", line 50, in fetch data = self.dataset.__getitems__(possibly_batched_index) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2746, in __getitems__ batch = self.__getitem__(keys) ^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2742, in __getitem__ return self._getitem(key) ^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2727, in _getitem formatted_output = format_table( ^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 639, in format_table return formatter(pa_table, query_type=query_type) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 407, in __call__ return self.format_batch(pa_table) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 521, in format_batch batch = self.python_features_decoder.decode_batch(batch) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 228, in decode_batch return self.features.decode_batch(batch) if self.features else batch ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 2084, in decode_batch [ File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 2085, in <listcomp> decode_nested_example(self[column_name], value, token_per_repo_id=token_per_repo_id) File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 1403, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/features/image.py", line 188, in decode_example image.load() # to avoid "Too many open files" errors ``` ### Steps to reproduce the bug Train on diffusers sd1.5 controlnet example script This will pop up randomly, you can see in wandb below when i manually resume run everytime this error appears ![image](https://github.com/user-attachments/assets/87e9a6af-cb3c-4398-82e7-d6a90add8d31) ### Expected behavior Training to continue without above error ### Environment info - datasets version: 3.0.0 - Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35 - Python version: 3.11.9 - huggingface_hub version: 0.25.1 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - 
fsspec version: 2024.6.1 Training on 4090
closed
https://github.com/huggingface/datasets/issues/7168
2024-09-25T01:42:49
2024-09-30T05:24:03
2024-09-30T05:24:02
{ "login": "Night1099", "id": 90132896, "type": "User" }
[]
false
[]
2,546,708,014
7,167
Error Mapping on sd3, sdxl and upcoming flux controlnet training scripts in diffusers
### Describe the bug ``` Map: 6%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 8000/138120 [19:27<5:16:36, 6.85 examples/s] Traceback (most recent call last): File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1416, in <module> main(args) File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1132, in main train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 560, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 3035, in map for rank, done, content in Dataset._map_single(**dataset_kwargs): File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 3461, in _map_single writer.write_batch(batch) File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 567, in write_batch self.write_table(pa_table, writer_batch_size) File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 579, in write_table pa_table = pa_table.combine_chunks() ^^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow/table.pxi", line 4387, in pyarrow.lib.Table.combine_chunks File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays Traceback (most recent call last): File "/usr/local/bin/accelerate", line 8, in <module> sys.exit(main()) ^^^^^^ File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/accelerate_cli.py", line 48, in main args.func(args) File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 1174, in launch_command simple_launcher(args) File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 769, in simple_launcher ``` ### Steps to reproduce the bug The dataset has no problem training on sd1.5 controlnet train script ### Expected behavior Script not randomly erroing with error above ### Environment info - `datasets` version: 3.0.0 - Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35 - Python version: 3.11.9 - `huggingface_hub` version: 0.25.1 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1 training on A100
closed
https://github.com/huggingface/datasets/issues/7167
2024-09-25T01:39:51
2024-09-30T05:28:15
2024-09-30T05:28:04
{ "login": "Night1099", "id": 90132896, "type": "User" }
[]
false
[]
2,545,608,736
7,166
fix docstring code example for distributed shuffle
close https://github.com/huggingface/datasets/issues/7163
closed
https://github.com/huggingface/datasets/pull/7166
2024-09-24T14:39:54
2024-09-24T14:42:41
2024-09-24T14:40:14
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,544,972,541
7,165
fix increase_load_count
it was failing since 3.0 and therefore not updating download counts on HF or in our dashboard
closed
https://github.com/huggingface/datasets/pull/7165
2024-09-24T10:14:40
2024-09-24T17:31:07
2024-09-24T13:48:00
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,544,757,297
7,164
fsspec.exceptions.FSTimeoutError when downloading dataset
### Describe the bug I am trying to download the `librispeech_asr` `clean` dataset, which results in a `FSTimeoutError` exception after downloading around 61% of the data. ### Steps to reproduce the bug ``` import datasets datasets.load_dataset("librispeech_asr", "clean") ``` The output is as follows: > Downloading data: 61%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 3.92G/6.39G [05:00<03:06, 13.2MB/s]Traceback (most recent call last): > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 56, in _runner > result[0] = await coro > ^^^^^^^^^^ > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/implementations/http.py", line 262, in _get_file > chunk = await r.content.read(chunk_size) > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 393, in read > await self._wait("read") > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 311, in _wait > with self._timer: > ^^^^^^^^^^^ > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/helpers.py", line 713, in __exit__ > raise asyncio.TimeoutError from None > TimeoutError > > The above exception was the direct cause of the following exception: > > Traceback (most recent call last): > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/load_dataset.py", line 3, in <module> > datasets.load_dataset("librispeech_asr", "clean") > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/load.py", line 2096, in load_dataset > builder_instance.download_and_prepare( > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 924, in download_and_prepare > self._download_and_prepare( > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 1647, in _download_and_prepare > super()._download_and_prepare( > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 977, in _download_and_prepare > split_generators = self._split_generators(dl_manager, **split_generators_kwargs) > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > File "/Users/Timon/.cache/huggingface/modules/datasets_modules/datasets/librispeech_asr/2712a8f82f0d20807a56faadcd08734f9bdd24c850bb118ba21ff33ebff0432f/librispeech_asr.py", line 115, in _split_generators > archive_path = dl_manager.download(_DL_URLS[self.config.name]) > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 159, in download > downloaded_path_or_paths = map_nested( > ^^^^^^^^^^^ > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 512, in map_nested > _single_map_nested((function, obj, batched, batch_size, types, None, True, None)) > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 380, in _single_map_nested > return [mapped_item for batch in iter_batched(data_struct, batch_size) for 
mapped_item in function(batch)] > ^^^^^^^^^^^^^^^ > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 216, in _download_batched > self._download_single(url_or_filename, download_config=download_config) > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 225, in _download_single > out = cached_path(url_or_filename, download_config=download_config) > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 205, in cached_path > output_path = get_from_cache( > ^^^^^^^^^^^^^^^ > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 415, in get_from_cache > fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm) > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 334, in fsspec_get > fs.get_file(path, temp_file.name, callback=callback) > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 118, in wrapper > return sync(self.loop, func, *args, **kwargs) > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 101, in sync > raise FSTimeoutError from return_result > fsspec.exceptions.FSTimeoutError > Downloading data: 61%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 3.92G/6.39G [05:00<03:09, 13.0MB/s] ### Expected behavior Complete the download ### Environment info Python version 3.12.6 Dependencies: > dependencies = [ > "accelerate>=0.34.2", > "datasets[audio]>=3.0.0", > "ipython>=8.18.1", > "librosa>=0.10.2.post1", > "torch>=2.4.1", > "torchaudio>=2.4.1", > "transformers>=4.44.2", > ] MacOS 14.6.1 (23G93)
open
https://github.com/huggingface/datasets/issues/7164
2024-09-24T08:45:05
2025-04-09T22:25:56
null
{ "login": "timonmerk", "id": 38216460, "type": "User" }
[]
false
[]
2,542,361,234
7,163
Set explicit seed in iterable dataset ddp shuffling example
### Describe the bug In the examples section of the iterable dataset docs https://huggingface.co/docs/datasets/en/package_reference/main_classes#datasets.IterableDataset the ddp example shuffles without seeding ```python from datasets.distributed import split_dataset_by_node ids = ds.to_iterable_dataset(num_shards=512) ids = ids.shuffle(buffer_size=10_000) # will shuffle the shards order and use a shuffle buffer when you start iterating ids = split_dataset_by_node(ds, world_size=8, rank=0) # will keep only 512 / 8 = 64 shards from the shuffled lists of shards when you start iterating dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards from this node's list of shards to each worker when you start iterating for example in ids: pass ``` This code would - I think - raise an error due to the lack of an explicit seed: https://github.com/huggingface/datasets/blob/2eb4edb97e1a6af2ea62738ec58afbd3812fc66e/src/datasets/iterable_dataset.py#L1707-L1711 ### Steps to reproduce the bug Run example code ### Expected behavior Add explicit seeding to example code ### Environment info latest datasets
closed
https://github.com/huggingface/datasets/issues/7163
2024-09-23T11:34:06
2024-09-24T14:40:15
2024-09-24T14:40:15
{ "login": "alex-hh", "id": 5719745, "type": "User" }
[]
false
[]
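For reference, a minimal sketch of the seeded version of that docs snippet, which is what the issue asks for; the toy dataset and the seed value 42 are illustrative assumptions, not the exact wording of the docs fix in #7166:

```python
# Hedged sketch: pass an explicit seed so the shard shuffle is reproducible
# across nodes and workers (the toy data and seed value are illustrative).
import datasets
from datasets.distributed import split_dataset_by_node

ds = datasets.Dataset.from_dict({"x": list(range(1024))})
ids = ds.to_iterable_dataset(num_shards=512)
ids = ids.shuffle(seed=42, buffer_size=10_000)          # explicit seed
ids = split_dataset_by_node(ids, world_size=8, rank=0)  # keeps 512 / 8 = 64 shards

for example in ids:
    pass
```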
2,542,323,382
7,162
Support JSON lines with empty struct
Support JSON lines with empty struct. Fix #7161. Related to: - #7160
closed
https://github.com/huggingface/datasets/pull/7162
2024-09-23T11:16:12
2024-09-23T11:30:08
2024-09-23T11:30:06
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,541,971,931
7,161
JSON lines with empty struct raise ArrowTypeError
JSON lines with empty struct raise ArrowTypeError: struct fields don't match or are in the wrong order See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5 > ArrowTypeError: struct fields don't match or are in the wrong order: Input fields: struct<> output fields: struct<pov_count: int64, update_count: int64, citation_needed_count: int64> Related to: - #7159
closed
https://github.com/huggingface/datasets/issues/7161
2024-09-23T08:48:56
2024-09-25T04:43:44
2024-09-23T11:30:07
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
2,541,877,813
7,160
Support JSON lines with missing struct fields
Support JSON lines with missing struct fields. Fix #7159. The implemented test raised: ``` TypeError: Couldn't cast array of type struct<age: int64> to {'age': Value(dtype='int32', id=None), 'name': Value(dtype='string', id=None)} ```
closed
https://github.com/huggingface/datasets/pull/7160
2024-09-23T08:04:09
2024-09-23T11:09:19
2024-09-23T11:09:17
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,541,865,613
7,159
JSON lines with missing struct fields raise TypeError: Couldn't cast array
JSON lines with missing struct fields raise TypeError: Couldn't cast array of type. See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5 One would expect that the missing struct fields are added with null values.
closed
https://github.com/huggingface/datasets/issues/7159
2024-09-23T07:57:58
2024-10-21T08:07:07
2024-09-23T11:09:18
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
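A small sketch of the kind of data this issue is about; the file name and field names are invented for illustration and are not the reporter's actual data:

```python
# Hedged repro sketch: the second line's struct is missing the "name" field.
# Before the fix in #7160, loading data shaped like this could fail with
# "TypeError: Couldn't cast array of type"; the expectation is that missing
# struct fields are instead filled with nulls.
import json
from datasets import load_dataset

rows = [
    {"meta": {"age": 1, "name": "a"}},
    {"meta": {"age": 2}},  # missing struct field
]
with open("data.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

ds = load_dataset("json", data_files="data.jsonl", split="train")
print(ds.features)
```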
2,541,494,765
7,158
google colab ex
null
closed
https://github.com/huggingface/datasets/pull/7158
2024-09-23T03:29:50
2024-12-20T16:41:07
2024-12-20T16:41:07
{ "login": "docfhsp", "id": 157789664, "type": "User" }
[]
true
[]
2,540,354,890
7,157
Fix zero proba interleave datasets
fix https://github.com/huggingface/datasets/issues/7147
closed
https://github.com/huggingface/datasets/pull/7157
2024-09-21T15:19:14
2024-09-24T14:33:54
2024-09-24T14:33:54
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,539,360,617
7,156
interleave_datasets resets shuffle state
### Describe the bug ``` import datasets import torch.utils.data def gen(shards): yield {"shards": shards} def main(): dataset = datasets.IterableDataset.from_generator( gen, gen_kwargs={'shards': list(range(25))} ) dataset = dataset.shuffle(buffer_size=1) dataset = datasets.interleave_datasets( [dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted" ) dataloader = torch.utils.data.DataLoader( dataset, batch_size=8, num_workers=8, ) for i, batch in enumerate(dataloader): print(batch) if i >= 10: break if __name__ == "__main__": main() ``` ### Steps to reproduce the bug Run the script, it will output ``` {'shards': [tensor([ 0, 8, 16, 24, 0, 8, 16, 24])]} {'shards': [tensor([ 1, 9, 17, 1, 9, 17, 1, 9])]} {'shards': [tensor([ 2, 10, 18, 2, 10, 18, 2, 10])]} {'shards': [tensor([ 3, 11, 19, 3, 11, 19, 3, 11])]} {'shards': [tensor([ 4, 12, 20, 4, 12, 20, 4, 12])]} {'shards': [tensor([ 5, 13, 21, 5, 13, 21, 5, 13])]} {'shards': [tensor([ 6, 14, 22, 6, 14, 22, 6, 14])]} {'shards': [tensor([ 7, 15, 23, 7, 15, 23, 7, 15])]} {'shards': [tensor([ 0, 8, 16, 24, 0, 8, 16, 24])]} {'shards': [tensor([17, 1, 9, 17, 1, 9, 17, 1])]} {'shards': [tensor([18, 2, 10, 18, 2, 10, 18, 2])]} ``` ### Expected behavior The shards should be shuffled. ### Environment info - `datasets` version: 3.0.0 - Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.25.0 - PyArrow version: 17.0.0 - Pandas version: 2.0.3 - `fsspec` version: 2023.6.0
open
https://github.com/huggingface/datasets/issues/7156
2024-09-20T17:57:54
2025-03-18T10:56:25
null
{ "login": "jonathanasdf", "id": 511073, "type": "User" }
[]
false
[]
2,533,641,870
7,155
Dataset viewer not working! Failure due to more than 32 splits.
Hello guys, I have a dataset and I didn't know I couldn't upload more than 32 splits. Now, my dataset viewer is not working. I don't have the dataset locally on my node anymore and recreating would take a week. And I have to publish the dataset coming Monday. I read about the practice, how I can resolve it and avoid this issue in the future. But, at the moment I need a hard fix for two of my datasets. And I don't want to mess or change anything and allow everyone in public to see the dataset and interact with it. Can you please help me? https://huggingface.co/datasets/laion/Wikipedia-X https://huggingface.co/datasets/laion/Wikipedia-X-Full
closed
https://github.com/huggingface/datasets/issues/7155
2024-09-18T12:43:21
2024-09-18T13:20:03
2024-09-18T13:20:03
{ "login": "sleepingcat4", "id": 81933585, "type": "User" }
[]
false
[]
2,532,812,323
7,154
Support ndjson data files
Support `ndjson` (Newline Delimited JSON) data files. Fix #7153.
closed
https://github.com/huggingface/datasets/pull/7154
2024-09-18T06:10:10
2024-09-19T11:25:17
2024-09-19T11:25:14
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,532,788,555
7,153
Support data files with .ndjson extension
### Feature request Support data files with `.ndjson` extension. ### Motivation We already support data files with `.jsonl` extension. ### Your contribution I am opening a PR.
closed
https://github.com/huggingface/datasets/issues/7153
2024-09-18T05:54:45
2024-09-19T11:25:15
2024-09-19T11:25:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,527,577,048
7,151
Align filename prefix splitting with WebDataset library
Align filename prefix splitting with WebDataset library. This PR uses the same `base_plus_ext` function as the one used by the `webdataset` library. Fix #7150. Related to #7144.
closed
https://github.com/huggingface/datasets/pull/7151
2024-09-16T06:07:39
2024-09-16T15:26:36
2024-09-16T15:26:34
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,527,571,175
7,150
WebDataset loader splits keys differently than WebDataset library
As reported by @ragavsachdeva (see discussion here: https://github.com/huggingface/datasets/pull/7144#issuecomment-2348307792), our webdataset loader is not aligned with the `webdataset` library when splitting keys from filenames. For example, we get a different key splitting for filename `/some/path/22.0/1.1.png`: - datasets library: `/some/path/22` and `0/1.1.png` - webdataset library: `/some/path/22.0/1`, `1.png` ```python import webdataset as wds wds.tariterators.base_plus_ext("/some/path/22.0/1.1.png") # ('/some/path/22.0/1', '1.png') ```
closed
https://github.com/huggingface/datasets/issues/7150
2024-09-16T06:02:47
2024-09-16T15:26:35
2024-09-16T15:26:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "bug", "color": "d73a4a" } ]
false
[]
2,524,497,448
7,149
Datasets Unknown Keyword Argument Error - task_templates
### Describe the bug Issue ```python from datasets import load_dataset examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>) ``` Gives error ``` TypeError: DatasetInfo.__init__() got an unexpected keyword argument 'task_templates' ``` A simple downgrade to lower `datasets v 2.21.0` solves it. ### Steps to reproduce the bug 1. `pip install datsets` 2. ```python from datasets import load_dataset examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>) ``` ### Expected behavior Should load the dataset correctly. ### Environment info - Datasets version `3.0.0` - `transformers` version: 4.45.0.dev0 - Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35 - Python version: 3.12.4 - Huggingface_hub version: 0.24.6 - Safetensors version: 0.4.5 - Accelerate version: 0.35.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.4.1+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes
closed
https://github.com/huggingface/datasets/issues/7149
2024-09-13T10:30:57
2025-03-06T07:11:55
2024-09-13T14:10:48
{ "login": "varungupta31", "id": 51288316, "type": "User" }
[]
false
[]
2,523,833,413
7,148
Bug: Error when downloading mteb/mtop_domain
### Describe the bug When downloading the dataset "mteb/mtop_domain", ran into the following error: ``` Traceback (most recent call last): File "/share/project/xzy/test/test_download.py", line 3, in <module> data = load_dataset("mteb/mtop_domain", "en", trust_remote_code=True) File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2606, in load_dataset builder_instance = load_dataset_builder( File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2277, in load_dataset_builder dataset_module = dataset_module_factory( File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1923, in dataset_module_factory raise e1 from None File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1896, in dataset_module_factory ).get_module() File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1507, in get_module local_path = self.download_loading_script() File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1467, in download_loading_script return cached_path(file_path, download_config=download_config) File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 211, in cached_path output_path = get_from_cache( File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 689, in get_from_cache fsspec_get( File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 395, in fsspec_get fs.get_file(path, temp_file.name, callback=callback) File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 648, in get_file http_get( File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 578, in http_get raise EnvironmentError( OSError: Consistency check failed: file should be of size 2191 but has size 2190 ((…)ets/mteb/mtop_domain@main/mtop_domain.py). We are sorry for the inconvenience. Please retry with `force_download=True`. If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub. ``` Try to download through HF datasets directly but got the same error as above. ```python from datasets import load_dataset data = load_dataset("mteb/mtop_domain", "en") ``` ### Steps to reproduce the bug ```python from datasets import load_dataset data = load_dataset("mteb/mtop_domain", "en", force_download=True) ``` With and without `force_download=True` both ran into the same error. ### Expected behavior Should download the dataset successfully. ### Environment info - datasets version: 2.21.0 - huggingface-hub version: 0.24.6
closed
https://github.com/huggingface/datasets/issues/7148
2024-09-13T04:09:39
2024-09-14T15:11:35
2024-09-14T15:11:35
{ "login": "ZiyiXia", "id": 77958037, "type": "User" }
[]
false
[]
2,523,129,465
7,147
IterableDataset strange deadlock
### Describe the bug ``` import datasets import torch.utils.data num_shards = 1024 def gen(shards): for shard in shards: if shard < 25: yield {"shard": shard} def main(): dataset = datasets.IterableDataset.from_generator( gen, gen_kwargs={"shards": list(range(num_shards))}, ) dataset = dataset.shuffle(buffer_size=1) dataset = datasets.interleave_datasets( [dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted" ) dataset = dataset.shuffle(buffer_size=1) dataloader = torch.utils.data.DataLoader( dataset, batch_size=8, num_workers=8, ) for i, batch in enumerate(dataloader): print(batch) if i >= 10: break print() if __name__ == "__main__": for _ in range(100): main() ``` ### Steps to reproduce the bug Running the script above, at some point it will freeze. - Changing `num_shards` from 1024 to 25 avoids the issue - Commenting out the final shuffle avoids the issue - Commenting out the interleave_datasets call avoids the issue As an aside, if you comment out just the final shuffle, the output from interleave_datasets is not shuffled at all even though there's the shuffle before it. So something about that shuffle config is not being propagated to interleave_datasets. ### Expected behavior The script should not freeze. ### Environment info - `datasets` version: 3.0.0 - Platform: macOS-14.6.1-arm64-arm-64bit - Python version: 3.12.5 - `huggingface_hub` version: 0.24.7 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.6.1 I observed this with 2.21.0 initially, then tried upgrading to 3.0.0 and could still repro.
closed
https://github.com/huggingface/datasets/issues/7147
2024-09-12T18:59:33
2024-09-23T09:32:27
2024-09-21T17:37:34
{ "login": "jonathanasdf", "id": 511073, "type": "User" }
[]
false
[]
2,519,820,162
7,146
Set dev version
null
closed
https://github.com/huggingface/datasets/pull/7146
2024-09-11T13:53:27
2024-09-12T04:34:08
2024-09-12T04:34:06
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,519,789,724
7,145
Release: 3.0.0
null
closed
https://github.com/huggingface/datasets/pull/7145
2024-09-11T13:41:47
2024-09-11T13:48:42
2024-09-11T13:48:41
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,519,393,560
7,144
Fix key error in webdataset
I was running into ``` example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]} KeyError: 'png' ``` The issue is that a filename may have multiple "." e.g. `22.05.png`. Changing `split` to `rsplit` fixes it. Related https://github.com/huggingface/datasets/issues/6880
closed
https://github.com/huggingface/datasets/pull/7144
2024-09-11T10:50:17
2025-01-15T10:32:43
2024-09-13T04:31:37
{ "login": "ragavsachdeva", "id": 26804893, "type": "User" }
[]
true
[]
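To make the `split` vs `rsplit` point from the PR description concrete, here is a tiny illustration using the `22.05.png` example mentioned above:

```python
# Splitting the example filename at the first vs. the last dot; the KeyError
# came from ending up with "05.png" instead of "png" as the field name.
name = "22.05.png"
print(name.split(".", 1))   # ['22', '05.png']
print(name.rsplit(".", 1))  # ['22.05', 'png']
```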
2,512,327,211
7,143
Modify add_column() to optionally accept a FeatureType as param
Fix #7142. **Before (Add + Cast)**: ``` from datasets import load_dataset, Value ds = load_dataset("rotten_tomatoes", split="test") lst = [i for i in range(len(ds))] ds = ds.add_column("new_col", lst) # Assigns int64 to new_col by default print(ds.features) ds = ds.cast_column("new_col", Value(dtype="uint16", id=None)) print(ds.features) ``` **Before (Numpy Workaround)**: ``` from datasets import load_dataset import numpy as np ds = load_dataset("rotten_tomatoes", split="test") lst = [i for i in range(len(ds))] ds = ds.add_column("new_col", np.array(lst, dtype=np.uint16)) print(ds.features) ``` **After**: ``` from datasets import load_dataset, Value ds = load_dataset("rotten_tomatoes", split="test") lst = [i for i in range(len(ds))] val = Value(dtype="uint16", id=None)) ds = ds.add_column("new_col", lst, feature=val) print(ds.features) ```
closed
https://github.com/huggingface/datasets/pull/7143
2024-09-08T10:56:57
2024-09-17T06:01:23
2024-09-16T15:11:01
{ "login": "varadhbhatnagar", "id": 20443618, "type": "User" }
[]
true
[]
2,512,244,938
7,142
Specifying datatype when adding a column to a dataset.
### Feature request There should be a way to specify the datatype of a column in `datasets.add_column()`. ### Motivation To specify a custom datatype, we have to use `datasets.add_column()` followed by `datasets.cast_column()` which is slow for large datasets. Another workaround is to pass a `numpy.array()` of desired type to the `datasets.add_column()` function. IMO this functionality should be natively supported. https://discuss.huggingface.co/t/add-column-with-a-particular-type-in-datasets/95674 ### Your contribution I can submit a PR for this.
closed
https://github.com/huggingface/datasets/issues/7142
2024-09-08T07:34:24
2024-09-17T03:46:32
2024-09-17T03:46:32
{ "login": "varadhbhatnagar", "id": 20443618, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,510,797,653
7,141
Older datasets throwing safety errors with 2.21.0
### Describe the bug The dataset loading was throwing some safety errors for this popular dataset `wmt14`. [in]: ``` import datasets # train_data = datasets.load_dataset("wmt14", "de-en", split="train") train_data = datasets.load_dataset("wmt14", "de-en", split="train") val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]") ``` [out]: ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) [<ipython-input-9-445f0ecc4817>](https://localhost:8080/#) in <cell line: 4>() 2 3 # train_data = datasets.load_dataset("wmt14", "de-en", split="train") ----> 4 train_data = datasets.load_dataset("wmt14", "de-en", split="train") 5 val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]") 12 frames [/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py](https://localhost:8080/#) in __init__(self, **kwargs) 636 if security is not None: 637 security = BlobSecurityInfo( --> 638 safe=security["safe"], av_scan=security["avScan"], pickle_import_scan=security["pickleImportScan"] 639 ) 640 self.security = security KeyError: 'safe' ``` ### Steps to reproduce the bug See above. ### Expected behavior Dataset properly loaded. ### Environment info version: 2.21.0
closed
https://github.com/huggingface/datasets/issues/7141
2024-09-06T16:26:30
2024-09-06T21:14:14
2024-09-06T19:09:29
{ "login": "alvations", "id": 1050316, "type": "User" }
[]
false
[]
2,508,078,858
7,139
Use load_dataset to load imagenet-1K but find an empty dataset
### Describe the bug ```python def get_dataset(data_path, train_folder="train", val_folder="val"): traindir = os.path.join(data_path, train_folder) valdir = os.path.join(data_path, val_folder) def transform_val_examples(examples): transform = Compose([ Resize(256), CenterCrop(224), ToTensor(), ]) examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]] return examples def transform_train_examples(examples): transform = Compose([ RandomResizedCrop(224), RandomHorizontalFlip(), ToTensor(), ]) examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]] return examples # @fengsicheng: This way is very slow for big dataset like ImageNet-1K (but can pass the network problem using local dataset) # train_set = load_dataset("imagefolder", data_dir=traindir, num_proc=4) # test_set = load_dataset("imagefolder", data_dir=valdir, num_proc=4) train_set = load_dataset("imagenet-1K", split="train", trust_remote_code=True) test_set = load_dataset("imagenet-1K", split="test", trust_remote_code=True) print(train_set["label"]) train_set.set_transform(transform_train_examples) test_set.set_transform(transform_val_examples) return train_set, test_set ``` above the code, but output of the print is a list of None: <img width="952" alt="image" src="https://github.com/user-attachments/assets/c4e2fdd8-3b8f-481e-8f86-9bbeb49d79fb"> ### Steps to reproduce the bug 1. just ran the code 2. see the print ### Expected behavior I do not know how to fix this, can anyone provide help or something? It is hurry for me ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-5.4.0-190-generic-x86_64-with-glibc2.31 - Python version: 3.10.14 - `huggingface_hub` version: 0.24.6 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.6.1
open
https://github.com/huggingface/datasets/issues/7139
2024-09-05T15:12:22
2024-10-09T04:02:41
null
{ "login": "fscdc", "id": 105094708, "type": "User" }
[]
false
[]
2,507,738,308
7,138
Cache only changed columns?
### Feature request Cache only the actual changes to the dataset i.e. changed columns. ### Motivation I realized that caching actually saves the complete dataset again. This is especially problematic for image datasets if one wants to only change another column e.g. some metadata and then has to save 5 TB again. ### Your contribution Is this even viable in the current architecture of the package? I quickly looked into it and it seems it would require significant changes. I would spend some time looking into this but maybe somebody could help with the feasibility and some plan to implement before spending too much time on it?
open
https://github.com/huggingface/datasets/issues/7138
2024-09-05T12:56:47
2024-09-20T13:27:20
null
{ "login": "Modexus", "id": 37351874, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,506,851,048
7,137
[BUG] dataset_info sequence unexpected behavior in README.md YAML
### Describe the bug When working on `dataset_info` yaml, I find my data column with format `list[dict[str, str]]` cannot be coded correctly. My data looks like ``` {"answers":[{"text": "ADDRESS", "label": "abc"}]} ``` My `dataset_info` in README.md is: ``` dataset_info: - config_name: default features: - name: answers sequence: - name: text dtype: string - name: label dtype: string ``` **Error log**: ``` pyarrow.lib.ArrowNotImplementedError: Unsupported cast from list<item: struct<text: string, label: string>> to struct using function cast_struct ``` ## Potential Reason After some analysis, it turns out that my yaml config is requiring `dict[str, list[str]]` instead of `list[dict[str, str]]`. It would work if I change my data to ``` {"answers":{"text": ["ADDRESS"], "label": ["abc", "def"]}} ``` These following 2 different `dataset_info` are actually equivalent. ``` dataset_info: - config_name: default features: - name: answers dtype: - name: text sequence: string - name: label sequence: string dataset_info: - config_name: default features: - name: answers sequence: - name: text dtype: string - name: label dtype: string ``` ### Steps to reproduce the bug ``` # README.md --- dataset_info: - config_name: default features: - name: answers sequence: - name: text dtype: string - name: label dtype: string configs: - config_name: default default: true data_files: - split: train path: - "test.jsonl" --- # test.jsonl # expected but not working {"answers":[{"text": "ADDRESS", "label": "abc"}]} # unexpected but working {"answers":{"text": ["ADDRESS"], "label": ["abc", "def"]}} ``` ### Expected behavior ``` dataset_info: - config_name: default features: - name: answers sequence: - name: text dtype: string - name: label dtype: string ``` Should work on following data format: ``` {"answers":[{"text":"ADDRESS", "label": "abc"}]} ``` ### Environment info - `datasets` version: 2.21.0 - Platform: macOS-14.6.1-arm64-arm-64bit - Python version: 3.12.4 - `huggingface_hub` version: 0.24.5 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.6.1
closed
https://github.com/huggingface/datasets/issues/7137
2024-09-05T06:06:06
2025-07-07T09:20:29
2025-07-04T19:50:59
{ "login": "ain-soph", "id": 13214530, "type": "User" }
[]
false
[]
2,506,115,857
7,136
Do not consume unnecessary memory during sharding
When sharding `IterableDataset`s, a temporary list is created that is then indexed. There is no need to create a temporary list of a potentially very large step/world size, with standard `islice` functionality, so we avoid it. ```shell pytest tests/test_distributed.py -k iterable ``` Runs successfully.
open
https://github.com/huggingface/datasets/pull/7136
2024-09-04T19:26:06
2024-09-04T19:28:23
null
{ "login": "janEbert", "id": 12694897, "type": "User" }
[]
true
[]
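A rough sketch of the idea described in that PR body, not its actual diff: stride over the examples lazily with `islice` instead of building a temporary list whose size grows with the step/world size.

```python
# Hedged illustration: keep every world_size-th example starting at `rank`
# without materializing an intermediate list.
from itertools import islice

def iter_shard(examples, rank, world_size):
    yield from islice(examples, rank, None, world_size)

print(list(iter_shard(range(10), rank=1, world_size=4)))  # [1, 5, 9]
```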
2,503,318,328
7,135
Bug: Type Mismatch in Dataset Mapping
# Issue: Type Mismatch in Dataset Mapping ## Description There is an issue with the `map` function in the `datasets` library where the mapped output does not reflect the expected type change. After applying a mapping function to convert an integer label to a string, the resulting type remains an integer instead of a string. ## Reproduction Code Below is a Python script that demonstrates the problem: ```python from datasets import Dataset # Original data data = { 'text': ['Hello', 'world', 'this', 'is', 'a', 'test'], 'label': [0, 1, 0, 1, 1, 0] } # Creating a Dataset object dataset = Dataset.from_dict(data) # Mapping function to convert label to string def add_one(example): example['label'] = str(example['label']) return example # Applying the mapping function dataset = dataset.map(add_one) # Iterating over the dataset to show results for item in dataset: print(item) print(type(item['label'])) ``` ## Expected Output After applying the mapping function, the expected output should have the `label` field as strings: ```plaintext {'text': 'Hello', 'label': '0'} <class 'str'> {'text': 'world', 'label': '1'} <class 'str'> {'text': 'this', 'label': '0'} <class 'str'> {'text': 'is', 'label': '1'} <class 'str'> {'text': 'a', 'label': '1'} <class 'str'> {'text': 'test', 'label': '0'} <class 'str'> ``` ## Actual Output The actual output still shows the `label` field values as integers: ```plaintext {'text': 'Hello', 'label': 0} <class 'int'> {'text': 'world', 'label': 1} <class 'int'> {'text': 'this', 'label': 0} <class 'int'> {'text': 'is', 'label': 1} <class 'int'> {'text': 'a', 'label': 1} <class 'int'> {'text': 'test', 'label': 0} <class 'int'> ``` ## Why necessary In the case of Image process we often need to convert PIL to tensor with same column name. Thank for every dev who review this issue. πŸ€—
open
https://github.com/huggingface/datasets/issues/7135
2024-09-03T16:37:01
2024-09-05T14:09:05
null
{ "login": "marko1616", "id": 45327989, "type": "User" }
[]
false
[]
2,499,484,041
7,134
Attempting to return a rank 3 grayscale image from dataset.map results in extreme slowdown
### Describe the bug Background: Digital images are often represented as a (Height, Width, Channel) tensor. This is the same for huggingface datasets that contain images. These images are loaded in Pillow containers which offer, for example, the `.convert` method. I can convert an image from a (H,W,3) shape to a grayscale (H,W) image and I have no problems with this. But when attempting to return a (H,W,1) shaped matrix from a map function, it never completes and sometimes even results in an OOM from the OS. I've used various methods to expand a (H,W) shaped array to a (H,W,1) array. But they all resulted in extremely long map operations consuming a lot of CPU and RAM. ### Steps to reproduce the bug Below is a minimal example using two methods to get the desired output. Both of which don't work ```py import tensorflow as tf import datasets import numpy as np ds = datasets.load_dataset("project-sloth/captcha-images") to_gray_pillow = lambda sample: {'image': np.expand_dims(sample['image'].convert("L"), axis=-1)} ds_gray = ds.map(to_gray_pillow) # Alternatively ds = datasets.load_dataset("project-sloth/captcha-images").with_format("tensorflow") to_gray_tf = lambda sample: {'image': tf.expand_dims(tf.image.rgb_to_grayscale(sample['image']), axis=-1)} ds_gray = ds.map(to_gray_tf) ``` ### Expected behavior I expect the map operation to complete and return a new dataset containing grayscale images in a (H,W,1) shape. ### Environment info datasets 2.21.0 python tested with both 3.11 and 3.12 host os : linux
open
https://github.com/huggingface/datasets/issues/7134
2024-09-01T13:55:41
2024-09-02T10:34:53
null
{ "login": "navidmafi", "id": 46371349, "type": "User" }
[]
false
[]
2,496,474,495
7,133
remove filecheck to enable symlinks
Enables streaming from local symlinks #7083 @lhoestq
closed
https://github.com/huggingface/datasets/pull/7133
2024-08-30T07:36:56
2024-12-24T14:25:22
2024-12-24T14:25:22
{ "login": "fschlatt", "id": 23191892, "type": "User" }
[]
true
[]
2,494,510,464
7,132
Fix data file module inference
I saved a dataset with two splits to disk with `DatasetDict.save_to_disk`. The train is bigger and ended up in 10 shards, whereas the test split only resulted in 1 split. Now when trying to load the dataset, an error is raised that not all splits have the same data format: > ValueError: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('test'): ('json', {})} This is not expected because both splits are saved as arrow files. I did some debugging and found that this is the case because the list of data_files includes a `state.json` file. Now this means for train split I get 10 ".arrow" and 1 ".json" file. Since datasets picks based on the most common extension this is correctly inferred as "arrow". In the test split, there is 1 .arrow and 1 .json file. Given the function description: > It picks the module based on the most common file extension. In case of a draw ".parquet" is the favorite, and then alphabetical order. This is not quite true though, because in a tie the extensions are actually based on reverse-alphabetical order: ``` for (ext, _), _ in sorted(extensions_counter.items(), key=sort_key, *reverse=True*): ``` Which thus leads to the module wrongly inferred as "json", whereas it should be "arrow", matching the train split. I first thought about adding "state.json" in the list of excluded files for the inference: https://github.com/huggingface/datasets/blob/main/src/datasets/load.py#L513. However, I think from digging into the code it looks like the right thing to do is to exclude it in the list of `data_files` to start with, because it is more of a metadata than a data file.
open
https://github.com/huggingface/datasets/pull/7132
2024-08-29T13:48:16
2024-09-02T19:52:13
null
{ "login": "HennerM", "id": 1714412, "type": "User" }
[]
true
[]
2,491,942,650
7,129
Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output
In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code: ```` from datasets import Features features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])}) features ```` which expects to output (as stated in the documentation): ```` {'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'], id=None)} ```` but it generates the following ```` {'label': ClassLabel(names=['bad', 'ok', 'good'], id=None)} ```` If my understanding is correct, this happens because although num_classes is used during the init of the object, it is afterward ignored: https://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/features/features.py#L975 I would like to work on this issue if this is something needed πŸ˜„
closed
https://github.com/huggingface/datasets/issues/7129
2024-08-28T12:27:48
2024-12-06T11:32:02
2024-12-06T11:32:02
{ "login": "sergiopaniego", "id": 17179696, "type": "User" }
[]
false
[]
2,490,274,775
7,128
Filter Large Dataset Entry by Entry
### Feature request I am not sure if this is a new feature, but I wanted to post this problem here, and hear if others have ways of optimizing and speeding up this process. Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset. Now, the dataset consists of many tables. Ideally, I would want to have some simple filtering criterion, such that I only see the "good" tables. Here is an example of what the code might look like: ``` dataset = load_dataset( "really-large-dataset", streaming=True ) # And let's say we process the dataset bit by bit because we want intermediate results dataset = islice(dataset, 10000) # Define a function to filter the data def filter_function(table): if some_condition: return True else: return False # Use the filter function on your dataset filtered_dataset = (ex for ex in dataset if filter_function(ex)) ``` And then I work on the processed dataset, which would be magnitudes faster than working on the original. I would love to hear if the problem setup + solution makes sense to people, and if anyone has suggestions! ### Motivation See description above ### Your contribution Happy to make PR if this is a new feature
open
https://github.com/huggingface/datasets/issues/7128
2024-08-27T20:31:09
2024-10-07T23:37:44
null
{ "login": "QiyaoWei", "id": 36057290, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,486,524,966
7,127
Caching shuffles by np.random.Generator results in unintuitive behavior
### Describe the bug Create a dataset. Save it to disk. Load from disk. Shuffle, usning a `np.random.Generator`. Iterate. Shuffle again. Iterate. The iterates are different since the supplied np.random.Generator has progressed between the shuffles. Load dataset from disk again. Shuffle and Iterate. See same result as before. Shuffle and iterate, and this time it does not have the same shuffling as ion previous run. The motivation is I have a deep learning loop with ``` for epoch in range(10): for batch in dataset.shuffle(generator=generator).iter(batch_size=32): .... # do stuff ``` where I want a new shuffling at every epoch. Instead I get the same shuffling. ### Steps to reproduce the bug Run the code below two times. ```python import datasets import numpy as np generator = np.random.default_rng(0) ds = datasets.Dataset.from_dict(mapping={"X":range(1000)}) ds.save_to_disk("tmp") print("First loop: ", end="") for _ in range(10): print(next(ds.shuffle(generator=generator).iter(batch_size=1))['X'], end=", ") print("") print("Second loop: ", end="") ds = datasets.Dataset.load_from_disk("tmp") for _ in range(10): print(next(ds.shuffle(generator=generator).iter(batch_size=1))['X'], end=", ") print("") ``` The output is: ``` $ python main.py Saving the dataset (1/1 shards): 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1000/1000 [00:00<00:00, 495019.95 examples/s] First loop: 459, 739, 72, 943, 241, 181, 845, 830, 896, 334, Second loop: 741, 847, 944, 795, 483, 842, 717, 865, 231, 840, $ python main.py Saving the dataset (1/1 shards): 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1000/1000 [00:00<00:00, 22243.40 examples/s] First loop: 459, 739, 72, 943, 241, 181, 845, 830, 896, 334, Second loop: 741, 741, 741, 741, 741, 741, 741, 741, 741, 741, ``` The second loop, on the second run, only spits out "741, 741, 741...." which is *not* the desired output ### Expected behavior I want the dataset to shuffle at every epoch since I provide it with a generator for shuffling. ### Environment info Datasets version 2.21.0 Ubuntu linux.
open
https://github.com/huggingface/datasets/issues/7127
2024-08-26T10:29:48
2025-03-10T17:12:57
null
{ "login": "el-hult", "id": 11832922, "type": "User" }
[]
false
[]
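A possible workaround for the loop described above, sketched here as an assumption rather than a confirmed fix: skip the cache lookup so each epoch's shuffle actually consumes fresh randomness from the generator.

```python
# Hedged workaround sketch: load_from_cache_file=False forces the shuffle to be
# recomputed every epoch instead of reusing the cached indices mapping.
import datasets
import numpy as np

generator = np.random.default_rng(0)
dataset = datasets.Dataset.from_dict({"X": list(range(1000))})

for epoch in range(3):
    shuffled = dataset.shuffle(generator=generator, load_from_cache_file=False)
    print(next(shuffled.iter(batch_size=1))["X"])
```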
2,485,939,495
7,126
Disable implicit token in CI
Disable implicit token in CI. This PR allows running CI tests locally without implicitly using the local user HF token. For example, run locally the tests in: - #7124
closed
https://github.com/huggingface/datasets/pull/7126
2024-08-26T05:29:46
2024-08-26T06:05:01
2024-08-26T05:59:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,485,912,246
7,125
Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport
Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport.
closed
https://github.com/huggingface/datasets/pull/7125
2024-08-26T05:09:35
2024-08-26T05:33:15
2024-08-26T05:27:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,485,890,442
7,124
Test get_dataset_config_info with non-existing/gated/private dataset
Test get_dataset_config_info with non-existing/gated/private dataset. Related to: - #7109 See also: - https://github.com/huggingface/dataset-viewer/pull/3037: https://github.com/huggingface/dataset-viewer/pull/3037/commits/bb1a7e00c53c242088597cab6572e4fd57797ecb
closed
https://github.com/huggingface/datasets/pull/7124
2024-08-26T04:53:59
2024-08-26T06:15:33
2024-08-26T06:09:42
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,484,003,937
7,123
Make dataset viewer more flexible in displaying metadata alongside images
### Feature request To display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. In the case of a dataset with multiple subsets, this would require the CSVs to be contained in the same folder as the images since they all need to be named `metadata.csv`. The request is that this be made more flexible for datasets with multiple subsets to avoid the need to put a `metadata.csv` into each image directory where they are not as easily accessed. ### Motivation When creating datasets with multiple subsets I can't get the images to display alongside their associated metadata (it's usually one or the other that will show up). Since this requires a file specifically named `metadata.csv`, I then have to place that file within the image directory, which makes it much more difficult to access. Additionally, it still doesn't necessarily display the images alongside their metadata correctly (see, for instance, [this discussion](https://huggingface.co/datasets/imageomics/2018-NEON-beetles/discussions/8)). It was suggested I bring this discussion to GitHub on another dataset struggling with a similar issue ([discussion](https://huggingface.co/datasets/imageomics/fish-vista/discussions/4)). In that case, it's a mix of data subsets, where some just reference the image URLs, while others actually have the images uploaded. The ones with images uploaded are not displaying images, but renaming that file to just `metadata.csv` would diminish the clarity of the construction of the dataset itself (and I'm not entirely convinced it would solve the issue). ### Your contribution I can make a suggestion for one approach to address the issue: For instance, even if it could just end in `_metadata.csv` or `-metadata.csv`, that would be very helpful to allow for more flexibility of dataset structure without impacting clarity. I would think that the functionality on the backend looking for `metadata.csv` could reasonably be adapted to look for such an ending on a filename (maybe also check that it has a `file_name` column?). Presumably, requiring the `configs` in a setup like on [this dataset](https://huggingface.co/datasets/imageomics/rare-species/blob/main/README.md) could also help in figuring out how it should work? ``` configs: - config_name: <image subset> data_files: - <image-metadata>.csv - <path/to/images>/*.jpg ``` I'd also be happy to look at whatever solution is decided upon and contribute to the ideation. Thanks for your time and consideration! The dataset viewer really is fabulous when it works :)
open
https://github.com/huggingface/datasets/issues/7123
2024-08-23T22:56:01
2024-10-17T09:13:47
null
{ "login": "egrace479", "id": 38985481, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,482,491,258
7,122
[interleave_dataset] sample batches from a single source at a time
### Feature request interleave_dataset and [RandomlyCyclingMultiSourcesExamplesIterable](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) enable us to sample data examples from different sources. But can we also sample batches in a similar manner (each batch only contains data from a single source)? ### Motivation Some recent research [[1](https://blog.salesforceairesearch.com/sfr-embedded-mistral/), [2](https://arxiv.org/pdf/2310.07554)] shows that source homogenous batching can be helpful for contrastive learning. Can we add a function called `RandomlyCyclingMultiSourcesBatchesIterable` to support this functionality? ### Your contribution I can contribute a PR. But I wonder what the best way is to test its correctness and robustness.
open
https://github.com/huggingface/datasets/issues/7122
2024-08-23T07:21:15
2024-08-23T07:21:15
null
{ "login": "memray", "id": 4197249, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
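As a sketch of the requested behaviour (illustrative only; `homogeneous_batches` and its signature are assumptions, not an existing `datasets` API), batch-level mixing can be prototyped on top of plain iterables: ```python import random from typing import Iterator, List, Optional def homogeneous_batches(sources: list, batch_size: int, probabilities: Optional[List[float]] = None, seed: int = 0) -> Iterator[list]: """Yield batches in which every example comes from a single source. `sources` is a list of iterables (e.g. several IterableDatasets); partial batches from exhausted sources are dropped.""" rng = random.Random(seed) iterators = [iter(source) for source in sources] weights = list(probabilities) if probabilities is not None else None while iterators: index = rng.choices(range(len(iterators)), weights=weights, k=1)[0] batch = [] try: for _ in range(batch_size): batch.append(next(iterators[index])) except StopIteration: # Source exhausted: drop it (and its weight) and keep sampling from the rest iterators.pop(index) if weights is not None: weights.pop(index) continue yield batch ```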
2,480,978,483
7,121
Fix typed examples iterable state dict
fix https://github.com/huggingface/datasets/issues/7085 as noted by @VeryLazyBoy and reported by @AjayP13
closed
https://github.com/huggingface/datasets/pull/7121
2024-08-22T14:45:03
2024-08-22T14:54:56
2024-08-22T14:49:06
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,480,674,237
7,120
don't mention the script if trust_remote_code=False
See https://huggingface.co/datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes for example. The error is: ``` FileNotFoundError: Couldn't find a dataset script at /src/services/worker/Omega02gdfdd/bioclip-demo-zero-shot-mistakes/bioclip-demo-zero-shot-mistakes.py or any data file in the same directory. Couldn't find 'Omega02gdfdd/bioclip-demo-zero-shot-mistakes' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes@12b0313ba4c3189ee5a24cb76200959e9bf7492e/data.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip'] ``` The issue there is that a `configs` parameter is set in the README, while the mentioned data file (`data.csv`) does not exist.
closed
https://github.com/huggingface/datasets/pull/7120
2024-08-22T12:32:32
2024-08-22T14:39:52
2024-08-22T14:33:52
{ "login": "severo", "id": 1676121, "type": "User" }
[]
true
[]
2,477,766,493
7,119
Install transformers with numpy-2 CI
Install transformers with numpy-2 CI. Note that transformers no longer pins numpy < 2 since transformers-4.43.0: - https://github.com/huggingface/transformers/pull/32018 - https://github.com/huggingface/transformers/releases/tag/v4.43.0
closed
https://github.com/huggingface/datasets/pull/7119
2024-08-21T11:14:59
2024-08-21T11:42:35
2024-08-21T11:36:50
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,477,676,893
7,118
Allow numpy-2.1 and test it without audio extra
Allow numpy-2.1 and test it without audio extra. This PR reverts: - #7114 Note that audio extra tests can be included again with numpy-2.1 once next numba-0.61.0 version is released.
closed
https://github.com/huggingface/datasets/pull/7118
2024-08-21T10:29:35
2024-08-21T11:05:03
2024-08-21T10:58:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,476,555,659
7,117
Audio dataset load everything in RAM and is very slow
Hello, I'm working with an audio dataset. I want to transcribe the audio that the dataset contains, and for that I use Whisper. My issue is that the dataset loads everything into RAM when I map it; obviously, when RAM usage is too high, the program crashes. To fix this issue, I'm using `writer_batch_size`, which I set to 10, but in this case the mapping of the dataset is extremely slow. To illustrate this, on 50 examples, with `writer_batch_size` set to 10, it takes 123.24 seconds to process the dataset, but without `writer_batch_size` set to 10, it takes about ten seconds to process the dataset, but then the process remains blocked (I assume that it is writing the dataset and therefore suffers from the same problem as `writer_batch_size`). ### Steps to reproduce the bug High RAM usage but fast (but actually slow when saving the dataset): ```py from datasets import load_dataset import time ds = load_dataset("WaveGenAI/audios2", split="train[:50]") # map the dataset def transcribe_audio(row): audio = row["audio"] # get the audio but do nothing with it row["transcribed"] = True return row time1 = time.time() ds = ds.map( transcribe_audio ) for row in ds: pass # do nothing, just iterate to trigger the map function print(f"Time taken: {time.time() - time1:.2f} seconds") ``` Low RAM usage but very, very slow: ```py from datasets import load_dataset import time ds = load_dataset("WaveGenAI/audios2", split="train[:50]") # map the dataset def transcribe_audio(row): audio = row["audio"] # get the audio but do nothing with it row["transcribed"] = True return row time1 = time.time() ds = ds.map( transcribe_audio, writer_batch_size=10 ) # set low writer_batch_size to avoid memory issues for row in ds: pass # do nothing, just iterate to trigger the map function print(f"Time taken: {time.time() - time1:.2f} seconds") ``` ### Expected behavior I think the processing should be much faster: on only 50 audio examples, the mapping takes several minutes while nothing is done (just loading the audio). ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-6.10.5-arch1-1-x86_64-with-glibc2.40 - Python version: 3.10.4 - `huggingface_hub` version: 0.24.5 - PyArrow version: 17.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2024.6.1 # Extra The dataset has been generated using the audiofolder loader, so I don't think anything specific in my code is causing this problem. ```py import argparse from datasets import load_dataset parser = argparse.ArgumentParser() parser.add_argument("--folder", help="folder path", default="/media/works/test/") args = parser.parse_args() dataset = load_dataset("audiofolder", data_dir=args.folder) # push the dataset to hub dataset.push_to_hub("WaveGenAI/audios") ``` Also, it's the combination of `audio = row["audio"]` and `row["transcribed"] = True` that causes problems: `row["transcribed"] = True` alone does nothing, and `audio = row["audio"]` alone sometimes causes problems, sometimes not.
open
https://github.com/huggingface/datasets/issues/7117
2024-08-20T21:18:12
2024-08-26T13:11:55
null
{ "login": "Jourdelune", "id": 64205064, "type": "User" }
[]
false
[]
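One way to keep memory bounded in situations like this (a sketch; it assumes the downstream transcription step can work from file paths or raw bytes) is to switch off automatic audio decoding so that `map` does not materialize full waveforms: ```python from datasets import load_dataset, Audio ds = load_dataset("WaveGenAI/audios2", split="train[:50]") # With decode=False, each row carries {"path": ..., "bytes": ...} instead of a decoded array ds = ds.cast_column("audio", Audio(decode=False)) def transcribe_audio(row): audio_ref = row["audio"] # lightweight reference; decode only when actually needed row["transcribed"] = True return row ds = ds.map(transcribe_audio) ```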
2,475,522,721
7,116
datasets cannot handle nested json if features is given.
### Describe the bug I have a json named temp.json. ```json {"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]} ``` I want to load it. ```python ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({ 'ref1': datasets.Value('string'), 'ref2': datasets.Value('string'), 'cuts': datasets.Sequence({ "cut1": datasets.Value("uint16"), "cut2": datasets.Value("uint16") }) })) ``` The above code does not work. However, I can load it without giving features. ```python ds = datasets.load_dataset('json', data_files="./temp.json") ``` Is it possible to load integers as uint16 to save some memory? ### Steps to reproduce the bug As in the bug description. ### Expected behavior The data are loaded and integers are uint16. ### Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.21.0 - Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35 - Python version: 3.11.9 - `huggingface_hub` version: 0.24.5 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
closed
https://github.com/huggingface/datasets/issues/7116
2024-08-20T12:27:49
2024-09-03T10:18:23
2024-09-03T10:18:07
{ "login": "ljw20180420", "id": 38550511, "type": "User" }
[]
false
[]
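A possible workaround (a sketch, not a confirmed fix) is to load without `features`, letting pyarrow infer the nested structure, and then cast the integer fields down afterwards; using the list-of-dict feature form keeps the same Arrow layout as the inferred one: ```python import datasets ds = datasets.load_dataset("json", data_files="./temp.json") features = datasets.Features({ "ref1": datasets.Value("string"), "ref2": datasets.Value("string"), # a list containing a dict describes a sequence of structs, matching the inferred schema "cuts": [{"cut1": datasets.Value("uint16"), "cut2": datasets.Value("uint16")}], }) ds = ds.cast(features) # DatasetDict.cast applies the cast to every split print(ds["train"].features) ```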
2,475,363,142
7,115
module 'pyarrow.lib' has no attribute 'ListViewType'
### Describe the bug Code: `!pip uninstall -y pyarrow !pip install --no-cache-dir pyarrow !pip uninstall -y pyarrow !pip install pyarrow --no-cache-dir !pip install --upgrade datasets transformers pyarrow !pip install pyarrow.parquet ! pip install pyarrow-core libparquet !pip install pyarrow --no-cache-dir !pip install pyarrow !pip install transformers !pip install --upgrade datasets !pip install datasets ! pip install pyarrow ! pip install pyarrow.lib ! pip install pyarrow.parquet !pip install transformers import pyarrow as pa print(pa.__version__) from datasets import load_dataset import pyarrow.parquet as pq import pyarrow.lib as lib import pandas as pd from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments from datasets import load_dataset from transformers import AutoTokenizer ! pip install pyarrow-core libparquet # Load the dataset for content moderation dataset = load_dataset("PolyAI/banking77") # Example dataset for customer support # Initialize the tokenizer tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m") # Tokenize the dataset def tokenize_function(examples): return tokenizer(examples['text'], padding="max_length", truncation=True) # Apply tokenization to the entire dataset tokenized_datasets = dataset.map(tokenize_function, batched=True) # Check the first few tokenized samples print(tokenized_datasets['train'][0]) from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments # Load the model model = AutoModelForSequenceClassification.from_pretrained("facebook/opt-350m", num_labels=77) # Define training arguments training_args = TrainingArguments( output_dir="./results", per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=3, eval_strategy="epoch", # save_strategy="epoch", logging_dir="./logs", learning_rate=2e-5, ) # Initialize the Trainer trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["test"], ) # Train the model trainer.train() # Evaluate the model trainer.evaluate() ` AttributeError Traceback (most recent call last) [<ipython-input-23-60bed3143a93>](https://localhost:8080/#) in <cell line: 22>() 20 21 ---> 22 from datasets import load_dataset 23 import pyarrow.parquet as pq 24 import pyarrow.lib as lib 5 frames [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 15 __version__ = "2.21.0" 16 ---> 17 from .arrow_dataset import Dataset 18 from .arrow_reader import ReadInstruction 19 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 74 75 from . import config ---> 76 from .arrow_reader import ArrowReader 77 from .arrow_writer import ArrowWriter, OptimizedTypedSequence 78 from .data_files import sanitize_patterns [/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module> 27 28 import pyarrow as pa ---> 29 import pyarrow.parquet as pq 30 from tqdm.contrib.concurrent import thread_map 31 [/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py](https://localhost:8080/#) in <module> 18 # flake8: noqa 19 ---> 20 from .core import * [/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py](https://localhost:8080/#) in <module> 31 32 try: ---> 33 import pyarrow._parquet as _parquet 34 except ImportError as exc: 35 raise ImportError( /usr/local/lib/python3.10/dist-packages/pyarrow/_parquet.pyx in init pyarrow._parquet() AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType' ### Steps to reproduce the bug https://colab.research.google.com/drive/1HNbsg3tHxUJOHVtYIaRnNGY4T2PnLn4a?usp=sharing ### Expected behavior Looks like there is an issue with datasets and pyarrow ### Environment info google colab python huggingface Found existing installation: pyarrow 17.0.0 Uninstalling pyarrow-17.0.0: Successfully uninstalled pyarrow-17.0.0 Collecting pyarrow Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl.metadata (3.3 kB) Requirement already satisfied: numpy>=1.16.6 in /usr/local/lib/python3.10/dist-packages (from pyarrow) (1.26.4) Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl (39.9 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 39.9/39.9 MB 188.9 MB/s eta 0:00:00 Installing collected packages: pyarrow ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible. ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible. Successfully installed pyarrow-17.0.0 WARNING: The following packages were previously imported in this runtime: [pyarrow] You must restart the runtime in order to use newly installed versions.
closed
https://github.com/huggingface/datasets/issues/7115
2024-08-20T11:05:44
2024-09-10T06:51:08
2024-09-10T06:51:08
{ "login": "neurafusionai", "id": 175128880, "type": "User" }
[]
false
[]
2,475,062,252
7,114
Temporarily pin numpy<2.1 to fix CI
Temporarily pin numpy<2.1 to fix CI. Fix #7111.
closed
https://github.com/huggingface/datasets/pull/7114
2024-08-20T08:42:57
2024-08-20T09:09:27
2024-08-20T09:02:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,475,029,640
7,113
Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch)
### Describe the bug Hi there, I use streaming and interleaving to combine multiple datasets saved in jsonl files. The size of dataset can vary (from 100ish to 100k-ish). I use dataset.map() and a big batch size to reduce the IO cost. It was working fine with datasets-2.16.1 but this problem shows up after I upgraded to datasets-2.19.2. With 2.21.0 the problem remains. Please see the code below to reproduce the problem. The dataset can iterate correctly if we set either streaming=False or drop_last_batch=False. I have to use drop_last_batch=True since it's for distributed training. ### Steps to reproduce the bug ```python # datasets==2.21.0 import datasets def data_prepare(examples): print(examples["sentence1"][0]) return examples batch_size = 101 # the size of the dataset is 100 # the dataset iterates correctly if we set either streaming=False or drop_last_batch=False dataset = datasets.load_dataset("mteb/biosses-sts", split="test", streaming=True) dataset = dataset.map(lambda x: data_prepare(x), drop_last_batch=True, batched=True, batch_size=batch_size) for ex in dataset: print(ex) pass ``` ### Expected behavior The dataset iterates regardless of the batch size. ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.14 - `huggingface_hub` version: 0.24.5 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.2.0
closed
https://github.com/huggingface/datasets/issues/7113
2024-08-20T08:26:40
2024-08-26T04:24:11
2024-08-26T04:24:10
{ "login": "memray", "id": 4197249, "type": "User" }
[]
false
[]
2,475,004,644
7,112
cudf-cu12 24.4.1, ibis-framework 8.0.0 requires pyarrow<15.0.0a0,>=14.0.1,pyarrow<16,>=2 and datasets 2.21.0 requires pyarrow>=15.0.0
### Describe the bug !pip install accelerate>=0.16.0 torchvision transformers>=4.25.1 datasets>=2.19.1 ftfy tensorboard Jinja2 peft==0.7.0 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible. ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible. to solve above error !pip install pyarrow==14.0.1 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. datasets 2.21.0 requires pyarrow>=15.0.0, but you have pyarrow 14.0.1 which is incompatible. ### Steps to reproduce the bug !pip install datasets>=2.19.1 ### Expected behavior run without dependency error ### Environment info Diffusers version: 0.31.0.dev0 Platform: Linux-6.1.85+-x86_64-with-glibc2.35 Running on Google Colab?: Yes Python version: 3.10.12 PyTorch version (GPU?): 2.3.1+cu121 (True) Flax version (CPU?/GPU?/TPU?): 0.8.4 (gpu) Jax version: 0.4.26 JaxLib version: 0.4.26 Huggingface_hub version: 0.23.5 Transformers version: 4.42.4 Accelerate version: 0.32.1 PEFT version: 0.7.0 Bitsandbytes version: not installed Safetensors version: 0.4.4 xFormers version: not installed Accelerator: Tesla T4, 15360 MiB Using GPU in script?: Using distributed or parallel set-up in script?:
open
https://github.com/huggingface/datasets/issues/7112
2024-08-20T08:13:55
2024-09-20T15:30:03
null
{ "login": "SoumyaMB10", "id": 174590283, "type": "User" }
[]
false
[]
2,474,915,845
7,111
CI is broken for numpy-2: Failed to fetch wheel: llvmlite==0.34.0
Ci is broken with error `Failed to fetch wheel: llvmlite==0.34.0`: https://github.com/huggingface/datasets/actions/runs/10466825281/job/28984414269 ``` Run uv pip install --system "datasets[tests_numpy2] @ ." Resolved 150 packages in 4.42s error: Failed to prepare distributions Caused by: Failed to fetch wheel: llvmlite==0.34.0 Caused by: Build backend failed to build wheel through `build_wheel()` with exit status: 1 --- stdout: running bdist_wheel /home/runner/.cache/uv/builds-v0/.tmpcyKh8S/bin/python /home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py LLVM version... --- stderr: Traceback (most recent call last): File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 105, in main_posix out = subprocess.check_output([llvm_config, '--version']) File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 421, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 503, in run with Popen(*popenargs, **kwargs) as process: File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 971, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 1863, in _execute_child raise child_exception_type(errno_num, err_msg, err_filename) FileNotFoundError: [Errno 2] No such file or directory: 'llvm-config' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 191, in <module> main() File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 181, in main main_posix('linux', '.so') File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 107, in main_posix raise RuntimeError("%s failed executing, please point LLVM_CONFIG " RuntimeError: llvm-config failed executing, please point LLVM_CONFIG to the path for llvm-config error: command '/home/runner/.cache/uv/builds-v0/.tmpcyKh8S/bin/python' failed with exit code 1 ```
closed
https://github.com/huggingface/datasets/issues/7111
2024-08-20T07:27:28
2024-08-21T05:05:36
2024-08-20T09:02:36
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
false
[]
2,474,747,695
7,110
Fix ConnectionError for gated datasets and unauthenticated users
Fix `ConnectionError` for gated datasets and unauthenticated users. See: - https://github.com/huggingface/dataset-viewer/issues/3025 Note that a recent change in the Hub returns dataset info for gated datasets and unauthenticated users, instead of raising a `GatedRepoError` as before. See: - https://github.com/huggingface/huggingface_hub/issues/2457 This PR adds an additional check (/auth-check) for gated datasets and raises `DatasetNotFoundError` for unauthenticated users, as it was the case before the change in the Hub. - Fix suggested by @Pierrci (thanks @Wauplin for pointing it out). Fix #7109.
closed
https://github.com/huggingface/datasets/pull/7110
2024-08-20T05:26:54
2024-08-20T15:11:35
2024-08-20T09:14:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,473,367,848
7,109
ConnectionError for gated datasets and unauthenticated users
Since the Hub returns dataset info for gated datasets and unauthenticated users, there is dead code: https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/load.py#L1846-L1852 We should remove the dead code and properly handle this case: currently we are raising a `ConnectionError` instead of a `DatasetNotFoundError` (as before). See: - https://github.com/huggingface/dataset-viewer/issues/3025 - https://github.com/huggingface/huggingface_hub/issues/2457
closed
https://github.com/huggingface/datasets/issues/7109
2024-08-19T13:27:45
2024-08-20T09:14:36
2024-08-20T09:14:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
false
[]
2,470,665,327
7,108
website broken: Create a new dataset repository, doesn't create a new repo in Firefox
### Describe the bug This issue is also reported here: https://discuss.huggingface.co/t/create-a-new-dataset-repository-broken-page/102644 This page is broken. https://huggingface.co/new-dataset I fill in the form with my text, and click `Create Dataset`. ![Screenshot 2024-08-16 at 15 55 37](https://github.com/user-attachments/assets/de16627b-7a55-4bcf-9f0b-a48227aabfe6) Then the form gets wiped. And no repo got created. No error message visible in the developer console. ![Screenshot 2024-08-16 at 15 56 54](https://github.com/user-attachments/assets/0520164b-431c-40a5-9634-11fd62c4f4c3) # Idea for improvement For better UX, if the repo cannot be created, then show an error message, that something went wrong. # Work around, that works for me ```python from huggingface_hub import HfApi, HfFolder repo_id = 'simon-arc-solve-fractal-v3' api = HfApi() username = api.whoami()['name'] repo_url = api.create_repo(repo_id=repo_id, exist_ok=True, private=True, repo_type="dataset") ``` ### Steps to reproduce the bug Go https://huggingface.co/new-dataset Fill in the form. Click `Create dataset`. Now the form is cleared. And the page doesn't jump anywhere. ### Expected behavior The moment the user clicks `Create dataset`, the repo gets created and the page jumps to the created repo. ### Environment info Firefox 128.0.3 (64-bit) macOS Sonoma 14.5
closed
https://github.com/huggingface/datasets/issues/7108
2024-08-16T17:23:00
2024-08-19T13:21:12
2024-08-19T06:52:48
{ "login": "neoneye", "id": 147971, "type": "User" }
[]
false
[]
2,470,444,732
7,107
load_dataset broken in 2.21.0
### Describe the bug `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)` used to work till 2.20.0 but doesn't work in 2.21.0 In 2.20.0: ![Screenshot 2024-08-16 at 3 57 10β€―PM](https://github.com/user-attachments/assets/0516489b-8187-486d-bee8-88af3381dee9) in 2.21.0: ![Screenshot 2024-08-16 at 3 57 24β€―PM](https://github.com/user-attachments/assets/bc257570-f461-41e4-8717-90a69ed7c24f) ### Steps to reproduce the bug 1. Spin up a new google collab 2. `pip install datasets==2.21.0` 3. `import datasets` 4. `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)` 5. Will throw an error. ### Expected behavior Try steps 1-5 again but replace datasets version with 2.20.0, it will work ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-6.1.85+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.23.5 - PyArrow version: 17.0.0 - Pandas version: 2.1.4 - `fsspec` version: 2024.5.0
closed
https://github.com/huggingface/datasets/issues/7107
2024-08-16T14:59:51
2024-08-18T09:28:43
2024-08-18T09:27:12
{ "login": "anjor", "id": 1911631, "type": "User" }
[]
false
[]
2,469,854,262
7,106
Rename LargeList.dtype to LargeList.feature
Rename `LargeList.dtype` to `LargeList.feature`. Note that `dtype` is usually used for NumPy data types ("int64", "float32",...): see `Value.dtype`. However, `LargeList` attribute (like `Sequence.feature`) expects a `FeatureType` instead. With this renaming: - we avoid confusion about the expected type and - we also align `LargeList` with `Sequence`.
closed
https://github.com/huggingface/datasets/pull/7106
2024-08-16T09:12:04
2024-08-26T04:31:59
2024-08-26T04:26:02
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,468,207,039
7,105
Use `huggingface_hub` cache
- use `hf_hub_download()` from `huggingface_hub` for HF files - `datasets` cache_dir is still used for: - caching datasets as Arrow files (that back `Dataset` objects) - extracted archives, uncompressed files - files downloaded via http (datasets with scripts) - I removed code that were made for http files (and also the dummy_data / mock_download_manager stuff that happened to rely on them and have been legacy for a while now)
closed
https://github.com/huggingface/datasets/pull/7105
2024-08-15T14:45:22
2024-09-12T04:36:08
2024-08-21T15:47:16
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
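For context, this is roughly what a Hub file download through `huggingface_hub` looks like (illustrative only; the PR wires this in internally rather than changing the user-facing API): ```python from huggingface_hub import hf_hub_download # Files from dataset repos land in the shared huggingface_hub cache (~/.cache/huggingface/hub) local_path = hf_hub_download( repo_id="ylecun/mnist", filename="README.md", repo_type="dataset", ) print(local_path) ```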
2,467,788,212
7,104
remove more script docs
null
closed
https://github.com/huggingface/datasets/pull/7104
2024-08-15T10:13:26
2024-08-15T10:24:13
2024-08-15T10:18:25
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
2,467,664,581
7,103
Fix args of feature docstrings
Fix Args section of feature docstrings. Currently, some args do not appear in the docs because they are not properly parsed due to the lack of their type (between parentheses).
closed
https://github.com/huggingface/datasets/pull/7103
2024-08-15T08:46:08
2024-08-16T09:18:29
2024-08-15T10:33:30
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
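For illustration, the convention the fix relies on looks like this (a made-up feature class, not an actual docstring from the codebase): ```python class ExampleFeature: """Example feature following the expected docstring convention. Args: length (`int`, *optional*, defaults to `-1`): Fixed length of the list, or -1 for variable length. Without the type in parentheses, the argument is not parsed and is dropped from the rendered docs. """ ```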
2,466,893,106
7,102
Slow iteration speeds when using IterableDataset.shuffle with load_dataset(data_files=..., streaming=True)
### Describe the bug When I load a dataset from a number of arrow files, as in: ``` random_dataset = load_dataset( "arrow", data_files={split: shard_filepaths}, streaming=True, split=split, ) ``` I'm able to get fast iteration speeds when iterating over the dataset without shuffling. When I shuffle the dataset, the iteration speed is reduced by ~1000x. It's very possible the way I'm loading dataset shards is not appropriate; if so please advise! Thanks for the help ### Steps to reproduce the bug Here's full code to reproduce the issue: - Generate a random dataset - Create shards of data independently using Dataset.save_to_disk() - The below will generate 16 shards (arrow files), of 512 examples each ``` import time from pathlib import Path from multiprocessing import Pool, cpu_count import torch from datasets import Dataset, load_dataset split = "train" split_save_dir = "/tmp/random_split" def generate_random_example(): return { 'inputs': torch.randn(128).tolist(), 'indices': torch.randint(0, 10000, (2, 20000)).tolist(), 'values': torch.randn(20000).tolist(), } def generate_shard_dataset(examples_per_shard: int = 512): dataset_dict = { 'inputs': [], 'indices': [], 'values': [] } for _ in range(examples_per_shard): example = generate_random_example() dataset_dict['inputs'].append(example['inputs']) dataset_dict['indices'].append(example['indices']) dataset_dict['values'].append(example['values']) return Dataset.from_dict(dataset_dict) def save_shard(shard_idx, save_dir, examples_per_shard): shard_dataset = generate_shard_dataset(examples_per_shard) shard_write_path = Path(save_dir) / f"shard_{shard_idx}" shard_dataset.save_to_disk(shard_write_path) return str(Path(shard_write_path) / "data-00000-of-00001.arrow") def generate_split_shards(save_dir, num_shards: int = 16, examples_per_shard: int = 512): with Pool(cpu_count()) as pool: args = [(m, save_dir, examples_per_shard) for m in range(num_shards)] shard_filepaths = pool.starmap(save_shard, args) return shard_filepaths shard_filepaths = generate_split_shards(split_save_dir) ``` Load the dataset as IterableDataset: ``` random_dataset = load_dataset( "arrow", data_files={split: shard_filepaths}, streaming=True, split=split, ) random_dataset = random_dataset.with_format("numpy") ``` Observe the iterations/second when iterating over the dataset directly, and applying shuffling before iterating: Without shuffling, this gives ~1500 iterations/second ``` start_time = time.time() for count, item in enumerate(random_dataset): if count > 0 and count % 100 == 0: elapsed_time = time.time() - start_time iterations_per_second = count / elapsed_time print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second") ``` ``` Processed 100 items at an average of 705.74 iterations/second Processed 200 items at an average of 1169.68 iterations/second Processed 300 items at an average of 1497.97 iterations/second Processed 400 items at an average of 1739.62 iterations/second Processed 500 items at an average of 1931.11 iterations/second` ``` When shuffling, this gives ~3 iterations/second: ``` random_dataset = random_dataset.shuffle(buffer_size=100,seed=42) start_time = time.time() for count, item in enumerate(random_dataset): if count > 0 and count % 100 == 0: elapsed_time = time.time() - start_time iterations_per_second = count / elapsed_time print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second") ``` ``` Processed 100 items at an average of 3.75 iterations/second Processed 200 items at an average of 3.93 iterations/second ``` ### Expected behavior Iterations per second should be barely affected by shuffling, especially with a small buffer size ### Environment info Datasets version: 2.21.0 Python 3.10 Ubuntu 22.04
open
https://github.com/huggingface/datasets/issues/7102
2024-08-14T21:44:44
2024-08-15T16:17:31
null
{ "login": "lajd", "id": 13192126, "type": "User" }
[]
false
[]
2,466,510,783
7,101
`load_dataset` from Hub with `name` to specify `config` using incorrect builder type when multiple data formats are present
Following [documentation](https://huggingface.co/docs/datasets/repository_structure#define-your-splits-and-subsets-in-yaml) I had defined different configs for [`Dataception`](https://huggingface.co/datasets/bigdata-pw/Dataception), a dataset of datasets: ```yaml configs: - config_name: dataception data_files: - path: dataception.parquet split: train default: true - config_name: dataset_5423 data_files: - path: datasets/5423.tar split: train ... - config_name: dataset_721736 data_files: - path: datasets/721736.tar split: train ``` The intent was for metadata to be browsable via Dataset Viewer, in addition to each individual dataset, and to allow datasets to be loaded by specifying the config/name to `load_dataset`. While testing `load_dataset` I encountered the following error: ```python >>> dataset = load_dataset("bigdata-pw/Dataception", "dataset_7691") Downloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 467k/467k [00:00<00:00, 1.99MB/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 71.0M/71.0M [00:02<00:00, 26.8MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "datasets\load.py", line 2145, in load_dataset builder_instance.download_and_prepare( File "datasets\builder.py", line 1027, in download_and_prepare self._download_and_prepare( File "datasets\builder.py", line 1100, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "datasets\packaged_modules\parquet\parquet.py", line 58, in _split_generators self.info.features = datasets.Features.from_arrow_schema(pq.read_schema(f)) ^^^^^^^^^^^^^^^^^ File "pyarrow\parquet\core.py", line 2325, in read_schema file = ParquetFile( ^^^^^^^^^^^^ File "pyarrow\parquet\core.py", line 318, in __init__ self.reader.open( File "pyarrow\_parquet.pyx", line 1470, in pyarrow._parquet.ParquetReader.open File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file. ``` The correct file is downloaded, however the incorrect builder type is detected; `parquet` due to other content of the repository. It would appear that the config needs to be taken into account. Note that I have removed the additional configs from the repository because of this issue and there is a limit of 3000 configs anyway so the Dataset Viewer doesn't work as I intended. I'll add them back in if it assists with testing.
open
https://github.com/huggingface/datasets/issues/7101
2024-08-14T18:12:25
2024-08-18T10:33:38
null
{ "login": "hlky", "id": 106811348, "type": "User" }
[]
false
[]
2,465,529,414
7,100
IterableDataset: cannot resolve features from list of numpy arrays
### Describe the bug When resolving features of an `IterableDataset`, a `pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values` error is raised. ``` Traceback (most recent call last): File "test.py", line 6 iter_ds = iter_ds._resolve_features() File "lib/python3.10/site-packages/datasets/iterable_dataset.py", line 2876, in _resolve_features features = _infer_features_from_batch(self.with_format(None)._head()) File "lib/python3.10/site-packages/datasets/iterable_dataset.py", line 63, in _infer_features_from_batch pa_table = pa.Table.from_pydict(batch) File "pyarrow/table.pxi", line 1813, in pyarrow.lib._Tabular.from_pydict File "pyarrow/table.pxi", line 5339, in pyarrow.lib._from_pydict File "pyarrow/array.pxi", line 374, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 344, in pyarrow.lib.array File "pyarrow/array.pxi", line 42, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values ``` ### Steps to reproduce the bug ```python from datasets import Dataset import numpy as np # create a list of numpy arrays iter_ds = Dataset.from_dict({'a': [[[1, 2, 3], [1, 2, 3]]]}).to_iterable_dataset().map(lambda x: {'a': [np.array(x['a'])]}) iter_ds = iter_ds._resolve_features() # errors here ``` ### Expected behavior Features can be successfully resolved. ### Environment info - `datasets` version: 2.21.0 - Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - `huggingface_hub` version: 0.23.4 - PyArrow version: 15.0.0 - Pandas version: 2.2.0 - `fsspec` version: 2023.10.0
open
https://github.com/huggingface/datasets/issues/7100
2024-08-14T11:01:51
2024-10-03T05:47:23
null
{ "login": "VeryLazyBoy", "id": 18899212, "type": "User" }
[]
false
[]
2,465,221,827
7,099
Set dev version
null
closed
https://github.com/huggingface/datasets/pull/7099
2024-08-14T08:31:17
2024-08-14T08:45:17
2024-08-14T08:39:25
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,465,016,562
7,098
Release: 2.21.0
null
closed
https://github.com/huggingface/datasets/pull/7098
2024-08-14T06:35:13
2024-08-14T06:41:07
2024-08-14T06:41:06
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,458,455,489
7,097
Some of DownloadConfig's properties are always being overridden in load.py
### Describe the bug The `extract_compressed_file` and `force_extract` properties of DownloadConfig are always being set to True in the function `dataset_module_factory` in the `load.py` file. This behavior is very annoying because data extracted will just be ignored the next time the dataset is loaded. See this image below: ![image](https://github.com/user-attachments/assets/9e76ebb7-09b1-4c95-adc8-a959b536f93c) ### Steps to reproduce the bug 1. Have a local dataset that contains archived files (zip, tar.gz, etc) 2. Build a dataset loading script to download and extract these files 3. Run the load_dataset function with a DownloadConfig that specifically set `force_extract` to False 4. The extraction process will start no matter if the archives was extracted previously ### Expected behavior The extraction process should not run when the archives were previously extracted and `force_extract` is set to False. ### Environment info datasets==2.20.0 python3.9
open
https://github.com/huggingface/datasets/issues/7097
2024-08-09T18:26:37
2024-08-09T18:26:37
null
{ "login": "ductai199x", "id": 29772899, "type": "User" }
[]
false
[]
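For reference, this is the kind of call being described (a sketch; the loading script path is a placeholder), where the `force_extract=False` setting is currently overridden inside `dataset_module_factory`: ```python from datasets import load_dataset, DownloadConfig download_config = DownloadConfig( extract_compressed_file=True, force_extract=False, # expected: reuse archives that were already extracted ) ds = load_dataset("path/to/loading_script.py", download_config=download_config) ```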
2,456,929,173
7,096
Automatically create `cache_dir` from `cache_file_name`
You get a pretty unhelpful error message when specifying a `cache_file_name` in a directory that doesn't exist, e.g. `cache_file_name="./cache/data.map"` ```python import datasets cache_file_name="./cache/train.map" dataset = datasets.load_dataset("ylecun/mnist") dataset["train"].map(lambda x: x, cache_file_name=cache_file_name) ``` ``` FileNotFoundError: [Errno 2] No such file or directory: '/.../cache/tmp48r61siw' ``` It is simple enough to create and I was expecting that this would have been the case. cc: @albertvillanova @lhoestq
closed
https://github.com/huggingface/datasets/pull/7096
2024-08-09T01:34:06
2024-08-15T17:25:26
2024-08-15T10:13:22
{ "login": "ringohoffman", "id": 27844407, "type": "User" }
[]
true
[]
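The change itself boils down to something like this (a simplified sketch of the idea, not the exact patch): ```python import os def ensure_cache_dir_exists(cache_file_name: str) -> None: # Create the parent directory of the requested cache file so that map() # can write its temporary file next to it instead of failing cache_dir = os.path.dirname(cache_file_name) if cache_dir: os.makedirs(cache_dir, exist_ok=True) ensure_cache_dir_exists("./cache/train.map") ```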
2,454,418,130
7,094
Add Arabic Docs to Datasets
Translate Docs into Arabic issue-number : #7093 [Arabic Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx) [English Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/en/index.mdx) @stevhliu
open
https://github.com/huggingface/datasets/pull/7094
2024-08-07T21:53:06
2024-08-07T21:53:06
null
{ "login": "AhmedAlmaghz", "id": 53489256, "type": "User" }
[]
true
[]
2,454,413,074
7,093
Add Arabic Docs to datasets
### Feature request Add Arabic Docs to datasets [Datasets Arabic](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx) ### Motivation @AhmedAlmaghz https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx ### Your contribution @AhmedAlmaghz https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx
open
https://github.com/huggingface/datasets/issues/7093
2024-08-07T21:48:05
2024-08-07T21:48:05
null
{ "login": "AhmedAlmaghz", "id": 53489256, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,451,393,658
7,092
load_dataset with multiple jsonlines files interprets datastructure too early
### Describe the bug likely related to #6460 using `datasets.load_dataset("json", data_dir= ... )` with multiple `.jsonl` files will error if one of the files (maybe the first file?) contains a full column of empty data. ### Steps to reproduce the bug real world example: data is available in this [PR-branch](https://github.com/Vipitis/shadertoys-dataset/pull/3/commits/cb1e7157814f74acb09d5dc2f1be3c0a868a9933). Because my files are chunked by months, some months contain all empty data for some columns, just by chance - these are `[]`. Otherwise it's all the same structure. ```python from datasets import load_dataset ds = load_dataset("json", data_dir="./data/annotated/api") ``` you get a long error trace, where in the middle it says something like ```cs TypeError: Couldn't cast array of type struct<id: int64, src: string, ctype: string, channel: int64, sampler: struct<filter: string, wrap: string, vflip: string, srgb: string, internal: string>, published: int64> to null ``` toy example: (on request) ### Expected behavior Some suggestions 1. give a better error message to the user 2. consider all files before deciding on a data structure for a given column. 3. if you encounter a new structure, and can't cast that to null, replace the null-hypothesis. (maybe something for pyarrow) as a workaround I have lazily implemented the following (essentially step 2) ```python import os import jsonlines import datasets api_files = os.listdir("./data/annotated/api") api_files = [f"./data/annotated/api/{f}" for f in api_files] api_file_contents = [] for f in api_files: with jsonlines.open(f) as reader: for obj in reader: api_file_contents.append(obj) ds = datasets.Dataset.from_list(api_file_contents) ``` this works fine for my usecase, but is potentially slower and less memory efficient for really large datasets (where this is unlikely to happen in the first place). ### Environment info - `datasets` version: 2.20.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.4 - `huggingface_hub` version: 0.23.4 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2023.10.0
open
https://github.com/huggingface/datasets/issues/7092
2024-08-06T17:42:55
2024-08-08T16:35:01
null
{ "login": "Vipitis", "id": 23384483, "type": "User" }
[]
false
[]
2,449,699,490
7,090
The test test_move_script_doesnt_change_hash fails because it runs the 'python' command while the python executable has a different name
### Describe the bug Tests should use the same Python path as they are launched with, which in the case of FreeBSD is /usr/local/bin/python3.11. Failure: ``` if err_filename is not None: > raise child_exception_type(errno_num, err_msg, err_filename) E FileNotFoundError: [Errno 2] No such file or directory: 'python' ``` ### Steps to reproduce the bug regular test run using PyTest ### Expected behavior n/a ### Environment info FreeBSD 14.1
open
https://github.com/huggingface/datasets/issues/7090
2024-08-06T00:35:05
2024-08-06T00:35:05
null
{ "login": "yurivict", "id": 271906, "type": "User" }
[]
false
[]
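The usual fix for this class of failure (a sketch; the real test code may differ) is to launch the interpreter via `sys.executable` rather than the literal string `"python"`: ```python import subprocess import sys # Instead of: subprocess.check_output(["python", "script.py"]) # use the interpreter running the test suite (e.g. /usr/local/bin/python3.11 on FreeBSD); # "script.py" is a placeholder here output = subprocess.check_output([sys.executable, "script.py"]) ```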
2,449,479,500
7,089
Missing pyspark dependency causes the testsuite to error out, instead of a few tests to be skipped
### Describe the bug see the subject ### Steps to reproduce the bug regular tests ### Expected behavior n/a ### Environment info version 2.20.0
open
https://github.com/huggingface/datasets/issues/7089
2024-08-05T21:05:11
2024-08-05T21:05:11
null
{ "login": "yurivict", "id": 271906, "type": "User" }
[]
false
[]
2,447,383,940
7,088
Disable warning when using with_format format on tensors
### Feature request If we write this code: ```python """Get data and define datasets.""" from enum import StrEnum from datasets import load_dataset from torch.utils.data import DataLoader from torchvision import transforms class Split(StrEnum): """Describes what type of split to use in the dataloader""" TRAIN = "train" TEST = "test" VAL = "validation" class ImageNetDataLoader(DataLoader): """Create an ImageNetDataloader""" _preprocess_transform = transforms.Compose( [ transforms.Resize(256), transforms.CenterCrop(224), ] ) def __init__(self, batch_size: int = 4, split: Split = Split.TRAIN): dataset = ( load_dataset( "imagenet-1k", split=split, trust_remote_code=True, streaming=True, ) .with_format("torch") .map(self._preprocess) ) super().__init__(dataset=dataset, batch_size=batch_size) def _preprocess(self, data): if data["image"].shape[0] < 3: data["image"] = data["image"].repeat(3, 1, 1) data["image"] = self._preprocess_transform(data["image"].float()) return data if __name__ == "__main__": dataloader = ImageNetDataLoader(batch_size=2) for batch in dataloader: print(batch["image"]) break ``` This will trigger an user warning : ```bash datasets\formatting\torch_formatter.py:85: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor). return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ``` ### Motivation This happens because the the way the formatted tensor is returned in `TorchFormatter._tensorize`. This function handle values of different types, according to some tests it seems that possible value types are `int`, `numpy.ndarray` and `torch.Tensor`. In particular this warning is triggered when the value type is `torch.Tensor`, because is not the suggested Pytorch way of doing it: - https://stackoverflow.com/questions/55266154/pytorch-preferred-way-to-copy-a-tensor - https://discuss.pytorch.org/t/it-is-recommended-to-use-source-tensor-clone-detach-or-sourcetensor-clone-detach-requires-grad-true/101218#:~:text=The%20warning%20points%20to%20wrapping%20a%20tensor%20in%20torch.tensor%2C%20which%20is%20not%20recommended.%0AInstead%20of%20torch.tensor(outputs)%20use%20outputs.clone().detach()%20or%20the%20same%20with%20.requires_grad_(True)%2C%20if%20necessary. ### Your contribution A solution that I found to be working is to change the current way of doing it: ```python return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ``` To: ```python if (isinstance(value, torch.Tensor)): tensor = value.clone().detach() if self.torch_tensor_kwargs.get('requires_grad', False): tensor.requires_grad_() return tensor else: return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) ```
open
https://github.com/huggingface/datasets/issues/7088
2024-08-05T00:45:50
2024-08-05T00:45:50
null
{ "login": "Haislich", "id": 42048782, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,447,158,643
7,087
Unable to create dataset card for Lushootseed language
### Feature request While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering languages that aren't available in the options? ### Motivation I'd like to add more information about my dataset in the dataset card, and the language is one of the most important pieces of information, since the entire dataset is primarily concerned collecting Lushootseed documents. ### Your contribution I can submit a pull request
closed
https://github.com/huggingface/datasets/issues/7087
2024-08-04T14:27:04
2024-08-06T06:59:23
2024-08-06T06:59:22
{ "login": "vaishnavsudarshan", "id": 134876525, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,445,516,829
7,086
load_dataset ignores cached datasets and tries to hit HF Hub, resulting in API rate limit errors
### Describe the bug I have been running lm-eval-harness a lot, which has resulted in hitting an API rate limit. This seems strange, since all of the data should be cached locally. I have in fact verified this. ### Steps to reproduce the bug 1. Be Me 2. Run `load_dataset("TAUR-Lab/MuSR")` 3. Hit rate limit error 4. Dataset is in .cache/huggingface/datasets 5. ??? ### Expected behavior We should not run into API rate limits if we have cached the dataset ### Environment info datasets 2.16.0 python 3.10.4
open
https://github.com/huggingface/datasets/issues/7086
2024-08-02T18:12:23
2025-06-16T18:43:29
null
{ "login": "tginart", "id": 11379648, "type": "User" }
[]
false
[]
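When everything is already cached, a common mitigation (a sketch, not a root-cause fix) is to put the libraries in offline mode so `load_dataset` resolves from the local cache instead of calling the Hub: ```python import os # Must be set before importing datasets os.environ["HF_DATASETS_OFFLINE"] = "1" os.environ["HF_HUB_OFFLINE"] = "1" from datasets import load_dataset ds = load_dataset("TAUR-Lab/MuSR") # served from the local cache, no Hub requests ```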
2,440,008,618
7,085
[Regression] IterableDataset is broken on 2.20.0
### Describe the bug In the latest version of datasets there is a major regression, after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select`), you can no longer iterate through the dataset multiple times. The issue seems to stem from the recent addition of "resumable IterableDatasets" (#6658) (@lhoestq). It seems like it's keeping state when it shouldn't. ### Steps to reproduce the bug Minimal Reproducible Example (comparing `datasets==2.17.0` and `datasets==2.20.0`) ``` #!/bin/bash # List of dataset versions to test versions=("2.17.0" "2.20.0") # Loop through each version for version in "${versions[@]}"; do # Install the specific version of the datasets library pip3 install -q datasets=="$version" 2>/dev/null # Run the Python script python3 - <<EOF from datasets import IterableDataset from datasets.features.features import Features, Value def test_gen(): yield from [{"foo": i} for i in range(10)] features = Features([("foo", Value("int64"))]) d = IterableDataset.from_generator(test_gen, features=features) mapped = d.map(lambda row: {"foo": row["foo"] * 2}) column = mapped.select_columns(["foo"]) print("Version $version - Iterate Once:", list(column)) print("Version $version - Iterate Twice:", list(column)) EOF done ``` The output looks like this: ``` Version 2.17.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}] Version 2.17.0 - Iterate Twice: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}] Version 2.20.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}] Version 2.20.0 - Iterate Twice: [] ``` ### Expected behavior The expected behavior is it version 2.20.0 should behave the same as 2.17.0. ### Environment info `datasets==2.20.0` on any platform.
closed
https://github.com/huggingface/datasets/issues/7085
2024-07-31T13:01:59
2024-08-22T14:49:37
2024-08-22T14:49:07
{ "login": "AjayP13", "id": 5404177, "type": "User" }
[]
false
[]
2,439,519,534
7,084
More easily support streaming local files
### Feature request Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files. ### Motivation I have downloaded FineWeb-edu locally and currently trying to stream the dataset from the local files. I have both the raw parquet files using `hugginface-cli download --repo-type dataset HuggingFaceFW/fineweb-edu` and the processed arrow files using `load_dataset("HuggingFaceFW/fineweb-edu")`. Streaming the files locally does not work well for both file types for two different reasons. **Arrow files** When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/datasets/HuggingFaceFW___fineweb-edu/default/0.0.0/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/fineweb-edu-train-*.arrow"})` resolving the data files is fast, but because `arrow` is not included in the known [extensions file list](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/utils/file_utils.py#L738) , all files are opened and scanned to determine the compression type. Adding `arrow` to the known extension types resolves this issue. **Parquet files** When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/hub/dataset-HuggingFaceFW___fineweb-edu/snapshots/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/data/CC-MAIN-*/train-*.parquet"})` the paths do not get resolved because the parquet files are symlinked from the blobs (which contain all files in case there are different versions). This occurs because the [pattern matching](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/data_files.py#L389) checks if the path is a file and does not check for symlinks. Symlinks (at least on my machine) are of type "other". ### Your contribution I have created a PR for fixing arrow file streaming and symlinks. However, I have not checked locally if the tests work or new tests need to be added. IMO, the easiest option would be to add a `streaming=download_first` option, but I'm afraid that exceeds my current knowledge of how the datasets library works. https://github.com/huggingface/datasets/pull/7083
open
https://github.com/huggingface/datasets/issues/7084
2024-07-31T09:03:15
2024-07-31T09:05:58
null
{ "login": "fschlatt", "id": 23191892, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
2,439,518,466
7,083
fix streaming from arrow files
null
closed
https://github.com/huggingface/datasets/pull/7083
2024-07-31T09:02:42
2024-08-30T15:17:03
2024-08-30T15:17:03
{ "login": "fschlatt", "id": 23191892, "type": "User" }
[]
true
[]
2,437,354,975
7,082
Support HTTP authentication in non-streaming mode
Support HTTP authentication in non-streaming mode, by support passing HTTP storage_options in non-streaming mode. - Note that currently, HTTP authentication is supported only in streaming mode. For example, this is necessary if a remote HTTP host requires authentication to download the data.
closed
https://github.com/huggingface/datasets/pull/7082
2024-07-30T09:25:49
2024-08-08T08:29:55
2024-08-08T08:24:06
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,437,059,657
7,081
Set load_from_disk path type as PathLike
Set `load_from_disk` path type as `PathLike`. This way it is aligned with `save_to_disk`.
closed
https://github.com/huggingface/datasets/pull/7081
2024-07-30T07:00:38
2024-07-30T08:30:37
2024-07-30T08:21:50
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,434,275,664
7,080
Generating train split takes a long time
### Describe the bug Loading a simple webdataset takes ~45 minutes. ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M") ``` ### Expected behavior The dataset should load immediately as it does when loaded through a normal indexed WebDataset loader. Generating splits should be optional and there should be a message showing how to disable it. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.14 - `huggingface_hub` version: 0.24.1 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
open
https://github.com/huggingface/datasets/issues/7080
2024-07-29T01:42:43
2024-10-02T15:31:22
null
{ "login": "alexanderswerdlow", "id": 35648800, "type": "User" }
[]
false
[]
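A sketch of the streaming route hinted at in the expected behaviour (whether it fits the training setup depends on the use case): ```python from datasets import load_dataset # streaming=True skips the "Generating train split" Arrow conversion entirely dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M", streaming=True) for example in dataset["train"].take(3): print(example.keys()) ```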
2,433,363,298
7,079
HfHubHTTPError: 500 Server Error: Internal Server Error for url:
### Describe the bug newly uploaded datasets, since yesterday, yields an error. old datasets, works fine. Seems like the datasets api server returns a 500 I'm getting the same error, when I invoke `load_dataset` with my dataset. Long discussion about it here, but I'm not sure anyone from huggingface have seen it. https://discuss.huggingface.co/t/hfhubhttperror-500-server-error-internal-server-error-for-url/99580/1 ### Steps to reproduce the bug this api url: https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3 respond with: ``` {"error":"Internal Error - We're working hard to fix this as soon as possible!"} ``` ### Expected behavior return no error with newer datasets. With older datasets I can load the datasets fine. ### Environment info # Browser When I access the api in the browser: https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3 ``` {"error":"Internal Error - We're working hard to fix this as soon as possible!"} ``` ### Request headers ``` Accept text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8 Accept-Encoding gzip, deflate, br, zstd Accept-Language en-US,en;q=0.5 Connection keep-alive Host huggingface.co Priority u=1 Sec-Fetch-Dest document Sec-Fetch-Mode navigate Sec-Fetch-Site cross-site Upgrade-Insecure-Requests 1 User-Agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:127.0) Gecko/20100101 Firefox/127.0 ``` ### Response headers ``` X-Firefox-Spdy h2 access-control-allow-origin https://huggingface.co access-control-expose-headers X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range content-length 80 content-type application/json; charset=utf-8 cross-origin-opener-policy same-origin date Fri, 26 Jul 2024 19:09:45 GMT etag W/"50-9qrwU+BNI4SD0Fe32p/nofkmv0c" referrer-policy strict-origin-when-cross-origin vary Origin via 1.1 1624c79cd07e6098196697a6a7907e4a.cloudfront.net (CloudFront) x-amz-cf-id SP8E7n5qRaP6i9c9G83dNAiOzJBU4GXSrDRAcVNTomY895K35H0nJQ== x-amz-cf-pop CPH50-C1 x-cache Error from cloudfront x-error-message Internal Error - We're working hard to fix this as soon as possible! x-powered-by huggingface-moon x-request-id Root=1-66a3f479-026417465ef42f49349fdca1 ```
closed
https://github.com/huggingface/datasets/issues/7079
2024-07-27T08:21:03
2024-09-20T13:26:25
2024-07-27T19:52:30
{ "login": "neoneye", "id": 147971, "type": "User" }
[]
false
[]
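A small sketch that reproduces the API check from issue 7079 in Python instead of the browser; using `requests` here is an assumption about tooling, not part of the original report:

```python
import requests

url = "https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3"
resp = requests.get(url)
print(resp.status_code)  # 500 at the time of the report
print(resp.json())       # {"error": "Internal Error - We're working hard to fix this ..."}
```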
2,433,270,271
7,078
Fix CI test_convert_to_parquet
Fix `test_convert_to_parquet` by patching `HfApi.preupload_lfs_files` and revert the temporary fix: - #7074 (see the patching sketch after this record)
closed
https://github.com/huggingface/datasets/pull/7078
2024-07-27T05:32:40
2024-07-27T05:50:57
2024-07-27T05:44:32
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
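A hedged sketch of what patching `HfApi.preupload_lfs_files` in a test could look like; the test name and the call under test are placeholders rather than the actual datasets test code:

```python
from unittest.mock import patch

from huggingface_hub import HfApi


def test_convert_to_parquet_sketch():
    # Replace the network call so CI never hits the Hub's preupload endpoint.
    with patch.object(HfApi, "preupload_lfs_files", return_value=None):
        ...  # the real test would call datasets.hub.convert_to_parquet(...) here
```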
2,432,345,489
7,077
column_names ignored by load_dataset() when loading CSV file
### Describe the bug load_dataset() ignores the column_names kwarg when loading a CSV file. Instead, it uses whatever values are on the first line of the file. ### Steps to reproduce the bug Call `load_dataset` to load data from a CSV file and specify the `column_names` kwarg (see the reproduction sketch after this record). ### Expected behavior The resulting dataset should have the specified column names **and** the first line of the file should be treated as data values. ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-5.10.0-30-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.9.2 - `huggingface_hub` version: 0.24.2 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
open
https://github.com/huggingface/datasets/issues/7077
2024-07-26T14:18:04
2024-07-30T07:52:26
null
{ "login": "luismsgomes", "id": 9130265, "type": "User" }
[]
false
[]
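A hedged reproduction of issue 7077; the file contents and column names are made up for illustration:

```python
from datasets import load_dataset

# data.csv (hypothetical, written without a header row):
#   alice,30
#   bob,25
ds = load_dataset(
    "csv",
    data_files="data.csv",
    column_names=["name", "age"],  # reportedly ignored: the first row becomes the header
    split="train",
)
print(ds.column_names)  # expected ['name', 'age'], with both rows kept as data
```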
2,432,275,393
7,076
πŸ§ͺ Do not mock create_commit
null
closed
https://github.com/huggingface/datasets/pull/7076
2024-07-26T13:44:42
2024-07-27T05:48:17
2024-07-27T05:48:17
{ "login": "coyotte508", "id": 342922, "type": "User" }
[]
true
[]
2,432,027,412
7,075
Update required soxr version from pre-release to release
Update required `soxr` version from pre-release to release 0.4.0: https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0
closed
https://github.com/huggingface/datasets/pull/7075
2024-07-26T11:24:35
2024-07-26T11:46:52
2024-07-26T11:40:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
2,431,772,703
7,074
Fix CI by temporarily marking test_convert_to_parquet as expected to fail
As a hotfix for CI, temporarily mark test_convert_to_parquet as expected to fail. Fix #7073. Revert once the root cause is fixed (see the xfail sketch after this record).
closed
https://github.com/huggingface/datasets/pull/7074
2024-07-26T09:03:33
2024-07-26T09:23:33
2024-07-26T09:16:12
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
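A hedged sketch of the temporary marker this hotfix describes; the reason string and the bare test signature are assumptions:

```python
import pytest


@pytest.mark.xfail(reason="Hub CI raises RevisionNotFoundError for refs/pr/1; see issue 7073")
def test_convert_to_parquet():
    ...  # the original test body stays unchanged; only the marker is temporary
```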
2,431,706,568
7,073
CI is broken for convert_to_parquet: Invalid rev id: refs/pr/1 404 error causes RevisionNotFoundError
See: https://github.com/huggingface/datasets/actions/runs/10095313567/job/27915185756 ``` FAILED tests/test_hub.py::test_convert_to_parquet - huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Root=1-66a25839-31ce7b475e70e7db1e4d44c2;b0c8870f-d5ef-4bf2-a6ff-0191f3df0f64) Revision Not Found for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs%2Fpr%2F1. Invalid rev id: refs/pr/1 ``` ``` /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/hub.py:86: in convert_to_parquet dataset.push_to_hub( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/dataset_dict.py:1722: in push_to_hub split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/arrow_dataset.py:5511: in _push_parquet_shards_to_hub api.preupload_lfs_files( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/hf_api.py:4231: in preupload_lfs_files _fetch_upload_modes( /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:118: in _inner_fn return fn(*args, **kwargs) /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/_commit_api.py:507: in _fetch_upload_modes hf_raise_for_status(resp) ```
closed
https://github.com/huggingface/datasets/issues/7073
2024-07-26T08:27:41
2024-07-27T05:48:02
2024-07-26T09:16:13
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
false
[]
2,430,577,916
7,072
nm
null
closed
https://github.com/huggingface/datasets/issues/7072
2024-07-25T17:03:24
2024-07-25T20:36:11
2024-07-25T20:36:11
{ "login": "brettdavies", "id": 26392883, "type": "User" }
[]
false
[]
2,430,313,011
7,071
Filter hangs
### Describe the bug When trying to filter my custom dataset, the process hangs, regardless of the lambda function used. It appears to be an issue with the way the Images are being handled. The dataset in question is a preprocessed version of https://huggingface.co/datasets/danaaubakirova/patfig where notably, I have converted the data to the Parquet format. ### Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('lcolonn/patfig', split='test') ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y') ``` Eventually I ctrl+C and I obtain this stack trace: ``` >>> ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y') Filter: 0%| | 0/998 [00:00<?, ? examples/s]Filter: 0%| | 0/998 [00:35<?, ? examples/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/fingerprint.py", line 482, in wrapper out = func(dataset, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3714, in filter indices = self.map( ^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 602, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3161, in map for rank, done, content in Dataset._map_single(**dataset_kwargs): File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3552, in _map_single batch = apply_function_on_filtered_inputs( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3421, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 6478, in get_indices_from_mask_function num_examples = len(batch[next(iter(batch.keys()))]) ~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 273, in __getitem__ value = self.format(key) ^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 376, in format return self.formatter.format_column(self.pa_table.select([key])) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 443, in format_column column = self.python_features_decoder.decode_column(column, pa_table.column_names[0]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
"/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 219, in decode_column return self.features.decode_column(column, column_name) if self.features else column ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in decode_column [decode_nested_example(self[column_name], value) if value is not None else None for value in column] File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in <listcomp> [decode_nested_example(self[column_name], value) if value is not None else None for value in column] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 1351, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/image.py", line 188, in decode_example image.load() # to avoid "Too many open files" errors ^^^^^^^^^^^^ File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/PIL/ImageFile.py", line 293, in load n, err_code = decoder.decode(b) ^^^^^^^^^^^^^^^^^ KeyboardInterrupt ``` Warning! This can even seem to cause some computers to crash. ### Expected behavior Should return the filtered dataset ### Environment info - `datasets` version: 2.20.0 - Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35 - Python version: 3.11.9 - `huggingface_hub` version: 0.24.0 - PyArrow version: 17.0.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
open
https://github.com/huggingface/datasets/issues/7071
2024-07-25T15:29:05
2024-07-25T15:36:59
null
{ "login": "lucienwalewski", "id": 61711045, "type": "User" }
[]
false
[]
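A hedged workaround for the hang reported in issue 7071: pass only the column the predicate needs via `input_columns`, so the Image feature is never decoded during filtering. Whether this avoids the hang for this particular dataset is an assumption:

```python
from datasets import load_dataset

ds = load_dataset("lcolonn/patfig", split="test")

# The predicate receives only the 'cpc_class' value, so image columns stay undecoded.
ds_filtered = ds.filter(lambda cpc_class: cpc_class != "Y", input_columns=["cpc_class"])
```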
2,430,285,235
7,070
how set_transform affects batch size?
### Describe the bug I am trying to fine-tune w2v-bert for an ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So I changed the preprocessing function to this: ``` def prepare_dataset(batch): input_features = processor(batch["audio"], sampling_rate=16000).input_features[0] input_length = len(input_features) labels = processor.tokenizer(batch["text"], padding=False).input_ids batch = { "input_features": [input_features], "input_length": [input_length], "labels": [labels] } return batch train_ds.set_transform(prepare_dataset) val_ds.set_transform(prepare_dataset) ``` After this, I also had to change the DataCollatorCTCWithPadding class like this: ``` @dataclass class DataCollatorCTCWithPadding: processor: Wav2Vec2BertProcessor padding: Union[bool, str] = True def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: # Separate input_features and labels input_features = [{"input_features": feature["input_features"][0]} for feature in features] labels = [feature["labels"][0] for feature in features] # Pad input features batch = self.processor.pad( input_features, padding=self.padding, return_tensors="pt", ) # Pad and process labels label_features = self.processor.tokenizer.pad( {"input_ids": labels}, padding=self.padding, return_tensors="pt", ) labels = label_features["input_ids"] attention_mask = label_features["attention_mask"] # Replace padding with -100 to ignore these tokens during loss calculation labels = labels.masked_fill(attention_mask.ne(1), -100) batch["labels"] = labels return batch ``` But now a strange thing is happening: no matter how much I increase the batch size, the amount of GPU V-RAM usage does not change, while the total number of steps in the progress bar (logging) changes. Is this normal, or have I made a mistake? ### Steps to reproduce the bug I can share my code if needed. ### Expected behavior The set_transform function should be applied to a number of examples equal to the batch size, and the result given to the model as one batch (see the sketch after this record). ### Environment info All versions are up to date.
open
https://github.com/huggingface/datasets/issues/7070
2024-07-25T15:19:34
2024-07-25T15:19:34
null
{ "login": "VafaKnm", "id": 103993288, "type": "User" }
[]
false
[]
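As background for the question in issue 7070, a small self-contained sketch of how `set_transform` is invoked; the toy data is made up, and whether this explains the V-RAM behaviour of the original training setup is not established here:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})


def upper(batch):
    # 'batch' holds exactly the rows accessed in this call, not a fixed batch size.
    return {"text": [t.upper() for t in batch["text"]]}


ds.set_transform(upper)
print(ds[0])   # transform applied to one row
print(ds[:3])  # transform applied to three rows at once
```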
2,429,281,339
7,069
Fix push_to_hub by not calling create_branch if PR branch
Fix push_to_hub by not calling create_branch when the target revision is a PR branch (e.g. `refs/pr/1`). Note that currently create_branch raises a 400 Bad Request error if the user passes a PR branch (e.g. `refs/pr/1`). EDIT: ~~Fix push_to_hub by not calling create_branch if the branch exists.~~ Note that currently create_branch raises a 403 Forbidden error even if all these conditions are met: - exist_ok is passed - the branch already exists - the user does not have WRITE permission Fix #7067. Related issue: - https://github.com/huggingface/huggingface_hub/issues/2419 (see the guard sketch after this record)
closed
https://github.com/huggingface/datasets/pull/7069
2024-07-25T07:50:04
2024-07-31T07:10:07
2024-07-30T10:51:01
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
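A hedged sketch of the guard PR 7069 describes; the helper name is made up, and only `HfApi.create_branch` is a real huggingface_hub call:

```python
from typing import Optional

from huggingface_hub import HfApi


def create_branch_unless_pr_ref(repo_id: str, revision: str, token: Optional[str] = None) -> None:
    """Skip branch creation for pull-request refs such as 'refs/pr/1'."""
    if revision.startswith("refs/pr/"):
        return  # PR refs already exist on the Hub; create_branch would fail with 400
    HfApi(token=token).create_branch(repo_id, branch=revision, repo_type="dataset", exist_ok=True)
```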
2,426,657,434
7,068
Fix prepare_single_hop_path_and_storage_options
Fix `_prepare_single_hop_path_and_storage_options`: - Do not pass HF authentication headers and HF user-agent to non-HF HTTP URLs - Do not overwrite passed `storage_options` nested values: - Before, when passed ```DownloadConfig(storage_options={"https": {"client_kwargs": {"raise_for_status": True}}})```, it was overwritten to ```{"https": {"client_kwargs": {"trust_env": True}}}``` - Now, the result combines both: ```{"https": {"client_kwargs": {"trust_env": True, "raise_for_status": True}}}``` (see the merge sketch after this record)
closed
https://github.com/huggingface/datasets/pull/7068
2024-07-24T05:52:34
2024-07-29T07:02:07
2024-07-29T06:56:15
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
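A hedged sketch of the nested-merge behaviour PR 7068 describes for `storage_options`; the function name is illustrative, not the actual datasets code:

```python
def merge_storage_options(defaults: dict, passed: dict) -> dict:
    """Recursively combine default options with user-passed ones instead of overwriting."""
    merged = dict(defaults)
    for key, value in passed.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_storage_options(merged[key], value)
        else:
            merged[key] = value
    return merged


defaults = {"https": {"client_kwargs": {"trust_env": True}}}
passed = {"https": {"client_kwargs": {"raise_for_status": True}}}
assert merge_storage_options(defaults, passed) == {
    "https": {"client_kwargs": {"trust_env": True, "raise_for_status": True}}
}
```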