WildRoad
Beyond Endpoints: Path-Centric Reasoning for Vectorized Off-Road Network Extraction
Official Repository: xiaofei-guan/MaGRoad
WildRoad is a global off-road road network dataset constructed efficiently with a dedicated interactive annotation tool tailored for road-network labeling. It addresses the lack of large-scale vectorized datasets in off-road environments and provides a benchmark for challenging terrains.
WildRoad Dataset Processing Pipeline
Note on Dataset Size: The fully processed dataset is quite large. If downloading the processed patches is inconvenient, you can download the raw source data instead and use the scripts provided here to reproduce the processed dataset exactly.
This repository contains scripts to process large-scale remote sensing images and their corresponding road network graphs into smaller, trainable patches.
Overview
The processing pipeline employs two strategies to crop the large map into patches:
- Strategy A (Non-overlapping): Crops the image using a regular, non-overlapping grid.
- Strategy B (Overlapping): Crops the image using an overlapping sliding window.
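The two strategies differ only in how the top-left corners of the crop windows are laid out. A minimal sketch, assuming a 1024-px patch size; `grid_offsets` and `sliding_offsets` are illustrative helpers, not the repository's actual functions:

```python
def grid_offsets(size, patch):
    """Strategy A: corner offsets for a regular, non-overlapping grid."""
    return list(range(0, size - patch + 1, patch))

def sliding_offsets(size, patch, stride):
    """Strategy B: corner offsets for an overlapping sliding window."""
    offs = list(range(0, size - patch + 1, stride))
    # Ensure the right/bottom edge of the large map is still covered.
    if offs and offs[-1] != size - patch:
        offs.append(size - patch)
    return offs
```

For a 4096-px map and 1024-px patches, Strategy A yields four offsets per axis (0, 1024, 2048, 3072), while a 512-px stride yields seven, producing the overlapping candidates that are later filtered.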
Filtering & De-duplication: All patches must meet a minimum road length density threshold. To avoid data redundancy, the pipeline uses the Weisfeiler-Lehman (WL) topological similarity algorithm. If an overlapping patch (Strategy B) contains a road topology that is too similar to the existing non-overlapping patches (Strategy A), it will be discarded. Otherwise, it will be kept as a valuable topological supplement.
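The WL check above can be sketched as follows. This is a minimal illustration, assuming patch graphs are given as adjacency dicts {node: set(neighbors)}; the repository's actual implementation and its exact similarity definition may differ (here we compare WL label multisets with a Jaccard-style score):

```python
import hashlib
from collections import Counter

def wl_labels(adj, iterations=3):
    """Collect the multiset of WL node labels over several refinement rounds."""
    labels = {n: str(len(neigh)) for n, neigh in adj.items()}  # init: node degree
    bag = Counter(labels.values())
    for _ in range(iterations):
        new = {}
        for n, neigh in adj.items():
            # Relabel each node by hashing its label plus its sorted neighbor labels.
            sig = labels[n] + "|" + ",".join(sorted(labels[m] for m in neigh))
            new[n] = hashlib.md5(sig.encode()).hexdigest()[:8]
        labels = new
        bag.update(labels.values())
    return bag

def wl_similarity(adj_a, adj_b, iterations=3):
    """Multiset-Jaccard overlap of WL labels, in [0, 1]."""
    a, b = wl_labels(adj_a, iterations), wl_labels(adj_b, iterations)
    union = sum((a | b).values())
    return sum((a & b).values()) / union if union else 1.0
```

A candidate Strategy B patch whose score against an existing Strategy A patch exceeds the threshold (0.7 by default) would be discarded as redundant.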
Folder Structure
Before running the script, ensure your raw data is organized into split folders. Inside each folder, images (.jpg) and their corresponding graph files (.pickle) should be paired by name (e.g., data0.jpg and data0.pickle).
Project Root/
├── script/
│   ├── process_single_split.py
│   ├── crop_patch_from_pickle_parallel.py
│   └── ...
├── train/
├── val/
└── test/
How It Works
The main entry point is process_single_split.py. When you run it on a target folder (e.g., test), the script will:
- Find all image-graph pairs in the folder.
- Crop the large data into candidate patches in parallel and save them temporarily in {split}_processed/.
- Filter redundant topological patches using WL similarity.
- Collect the final valid patches into the {split}_patches/ directory.
The output will contain two subdirectories for each split:
- {split}_A: contains strictly non-overlapping patches.
- {split}_AB: contains both non-overlapping and selected overlapping patches.
Note: Only the RGB image and the graph data are kept in the final output. The debug masks are ignored to save disk space.
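Consuming a final patch pair therefore means reading one JPEG plus one pickled graph. A minimal sketch; the pickle schema shown in the comment is an assumption, so adjust to whatever the graph files actually store:

```python
import pickle

def load_patch(img_path, graph_path):
    """Load one processed patch pair: raw JPEG bytes plus its graph object."""
    with open(img_path, "rb") as f:
        jpg_bytes = f.read()       # decode with PIL/cv2 as needed
    with open(graph_path, "rb") as g:
        graph = pickle.load(g)     # road-network graph (schema as produced by the pipeline)
    return jpg_bytes, graph
```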
Usage
Process each split sequentially by running the following commands from the Project Root:
# 1. Process the training set
python script/process_single_split.py train --workers 4
# 2. Process the validation set
python script/process_single_split.py val --workers 4
# 3. Process the test set
python script/process_single_split.py test --workers 4
Optional Arguments:
- --workers: number of parallel threads to speed up the cropping process (default: 4).
- --patch_size: output patch size (default: 1024).
- --sim_threshold: WL similarity threshold for discarding redundant B patches (default: 0.7).
Verification (Expected Patch Counts)
After running the processing scripts, you can verify your results by checking the number of generated patches. The expected counts of data pairs (image + graph) for each split are as follows:
| Split | Strategy A (_A) | Strategy A+B (_AB) |
|---|---|---|
| train | 5566 | 12896 |
| val | 1306 | 2986 |
| test | 1146 | 2666 |
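One quick way to check these counts is to count complete jpg/pickle pairs in each output directory. `count_pairs` is a hypothetical helper sketched for this purpose:

```python
from pathlib import Path

def count_pairs(patch_dir):
    """Count patches that have both a .jpg image and a .pickle graph."""
    jpgs = {p.stem for p in Path(patch_dir).glob("*.jpg")}
    pkls = {p.stem for p in Path(patch_dir).glob("*.pickle")}
    return len(jpgs & pkls)
```

For example, after processing the test split, count_pairs("test_patches/test_A") should report 1146.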