# Low-Light Restoration Dataset

Paired low-light / normal-light image patches for training and evaluation.
## Contents of `low-light.tar`

```text
low-light/
├── train/        30,000 paired patches (60,000 files)
├── val/           1,000 paired patches (2,000 files)
├── test/          1,000 input patches (no ground truth)
└── dataset.py    reference PyTorch Dataset (Python 3.10+)
```

All images are lossless WebP (`.webp`).
## File naming

Every image is named `<id>-<role>.webp`, where `role` ∈ {`in`, `gt`}:

| Split | Files |
|---|---|
| `train/` | `<id>-in.webp` paired with `<id>-gt.webp` (30,000 pairs) |
| `val/` | `<id>-in.webp` paired with `<id>-gt.webp` (1,000 pairs) |
| `test/` | `<id>-in.webp` only; no GT is provided |

`<id>` is opaque; do not parse it. Pairing is by exact stem match.
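Since pairing is by exact stem match, a quick sanity check after extraction can catch an incomplete download. This is a hypothetical helper, not part of the shipped `dataset.py`:

```python
from pathlib import Path

def check_pairing(split_dir):
    """Return the ids of `<id>-in.webp` files that have no `<id>-gt.webp`.

    An empty list means the split is fully paired. Hypothetical helper,
    not part of the shipped dataset.py.
    """
    split_dir = Path(split_dir)
    missing = []
    for in_path in sorted(split_dir.glob("*-in.webp")):
        # Exact stem match: strip the "-in.webp" suffix, never parse <id>.
        stem_id = in_path.name[: -len("-in.webp")]
        if not (split_dir / f"{stem_id}-gt.webp").exists():
            missing.append(stem_id)
    return missing
```

Run it on `train/` and `val/` only; `test/` intentionally has no GT files.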
## Quick start

```python
from pathlib import Path

from torch.utils.data import DataLoader
from torchvision.utils import save_image

from dataset import (
    PairedLowLightDataset, TestLowLightDataset,
    PairedCompose, PairedRandomCrop, PairedRandomFlip, PairedToTensor,
)

root = Path("low-light")

train_tf = PairedCompose([
    PairedRandomCrop(256),
    PairedRandomFlip(p_h=0.5),
    PairedToTensor(),
])

train_set = PairedLowLightDataset(root / "train", transform=train_tf)
val_set = PairedLowLightDataset(root / "val", transform=None)
test_set = TestLowLightDataset(root / "test", transform=None)

train_loader = DataLoader(train_set, batch_size=16, shuffle=True, num_workers=4)
val_loader = DataLoader(val_set, batch_size=1, shuffle=False, num_workers=2)
test_loader = DataLoader(test_set, batch_size=1, shuffle=False, num_workers=2)

for x, y in train_loader:  # x, y are float32 CHW tensors in [0, 1]
    ...

Path("submission").mkdir(exist_ok=True)
for x, stem in test_loader:  # test yields (input, stem)
    pred = model(x)  # model: your restoration network
    save_image(pred, f"submission/{stem[0]}-in.webp")
```
## Dataset classes (in `dataset.py`)

- `PairedLowLightDataset(root, transform=None)`: for `train/` and `val/`. Returns `(input_tensor, gt_tensor)`.
- `TestLowLightDataset(root, transform=None)`: for `test/`. Returns `(input_tensor, stem)`, where `stem` lets you save predictions under the original filename.
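For readers who want to see the shape of the indexing logic without opening `dataset.py`, here is an illustrative skeleton. It is not the shipped implementation: the `loader` parameter is a stand-in for PIL image loading, and the transform dispatch follows the contract described below.

```python
from pathlib import Path

class PairedLowLightDatasetSketch:
    """Illustrative skeleton of a paired dataset; dataset.py is authoritative."""

    def __init__(self, root, transform=None, loader=None):
        root = Path(root)
        # Build the index once: each -in file paired with its -gt by exact stem.
        self.pairs = sorted(
            (p, p.with_name(p.name[: -len("-in.webp")] + "-gt.webp"))
            for p in root.glob("*-in.webp")
        )
        self.transform = transform
        self.loader = loader or (lambda p: p)  # placeholder for PIL loading

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, i):
        in_path, gt_path = self.pairs[i]
        x, y = self.loader(in_path), self.loader(gt_path)
        if self.transform is not None:
            if getattr(self.transform, "paired", False):
                return self.transform(x, y)   # pair-aware callable
            return self.transform(x), self.transform(y)  # per-image callable
        return x, y
```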
## Transform contract

A transform may be one of:

1. `None`: images are converted to float32 CHW tensors in [0, 1].
2. A single-image, torchvision-style callable `fn(pil) -> tensor`, applied independently to input and GT. Use it only for deterministic ops (`ToTensor`, `Normalize`); random single-image transforms will desync the pair.
3. A pair-aware callable `fn(in_pil, gt_pil) -> (in_tensor, gt_tensor)`, marked by setting `fn.paired = True`. The callable owns its randomness and must apply the same geometric augmentation to both images.

The provided `PairedCompose`, `PairedRandomCrop`, `PairedRandomFlip`, and `PairedToTensor` building blocks already follow contract #3.
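As a concrete illustration of contract #3, here is a minimal pair-aware horizontal flip. It operates on plain nested lists rather than PIL images so the control flow is easy to see; for real training, use the shipped `PairedRandomFlip`.

```python
import random

class PairedRandomHorizontalFlip:
    """One coin flip drives both images, so the pair never desyncs."""

    paired = True  # marker required by the transform contract (#3)

    def __init__(self, p=0.5, rng=None):
        self.p = p
        self.rng = rng or random.Random()

    def __call__(self, inp, gt):
        # Draw randomness ONCE, then apply the identical flip to both images.
        if self.rng.random() < self.p:
            inp = [row[::-1] for row in inp]
            gt = [row[::-1] for row in gt]
        return inp, gt
```

The key design point is that the callable draws its randomness once per pair; wrapping two independent single-image random flips would flip input and GT inconsistently about half the time.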
## Submission format

For each `<id>-in.webp` in `test/`, produce a restored image and save it as `<id>-in.webp` (or `.png` if preferred). Keep the original stem.

Evaluation pairs each prediction against the private ground truth held by the organizers; do not attempt to obtain or infer test GTs.
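Before packaging a submission, it can help to verify that every test input has a prediction with the right stem. This is a hypothetical checker; the organizers' actual evaluation script may differ.

```python
from pathlib import Path

ALLOWED_SUFFIXES = (".webp", ".png")  # per the submission format above

def missing_predictions(test_dir, submission_dir):
    """Return test stems that have no prediction file in submission_dir.

    A stem here is e.g. "<id>-in"; a prediction may use any allowed suffix.
    """
    test_dir, submission_dir = Path(test_dir), Path(submission_dir)
    missing = []
    for in_path in sorted(test_dir.glob("*-in.webp")):
        stem = in_path.stem  # keeps the original "<id>-in" stem
        if not any((submission_dir / f"{stem}{s}").exists() for s in ALLOWED_SUFFIXES):
            missing.append(stem)
    return missing
```

An empty return value means every test input is covered.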
## Requirements

- Python 3.10+
- `torch`, `torchvision`, `Pillow` (with WebP support; built into modern Pillow)