WilliamBonilla62 committed · verified · Commit ee28954 · Parent(s): ffec344

Update README.md

Files changed (1): README.md (+143 −23)
---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: mask
    dtype: image
  - name: scene
    dtype: string
  - name: filename
    dtype: string
  - name: relpath
    dtype: string
  splits:
  - name: all
    num_bytes: 5939257990
    num_examples: 7436
  download_size: 5733614565
  dataset_size: 5939257990
configs:
- config_name: default
  data_files:
  - split: all
    path: data/all-*
task_categories:
- image-segmentation
language:
- en
size_categories:
- 1K<n<10K
---

# RUGD (Unofficial Mirror)

> **Unofficial mirror of the RUGD dataset. I am *not* an author or owner of RUGD. All credit goes to the original creators.**

The **RUGD** dataset covers outdoor, unstructured environments for autonomous navigation and visual perception research (e.g., semantic segmentation and scene understanding).

This repository re-hosts the original files **as a convenience mirror** for researchers who have trouble accessing the official hosting due to connectivity issues or browser warnings.

**Note on the official site:** the original download page has, at times, been served over plain HTTP or with configurations that some browsers flag as “Not secure.” This mirror is provided to make access easier; please prefer the official source when it works for you.

---

## Attribution & Citation

If you use this dataset, **please cite the original paper**:

```bibtex
@inproceedings{RUGD2019IROS,
  author    = {Wigness, Maggie and Eum, Sungmin and Rogers, John G. and Han, David and Kwon, Heesung},
  title     = {A RUGD Dataset for Autonomous Navigation and Visual Perception in Unstructured Outdoor Environments},
  booktitle = {International Conference on Intelligent Robots and Systems (IROS)},
  year      = {2019}
}
```

**Authors:** Maggie Wigness, Sungmin Eum, John G. Rogers, David Han, Heesung Kwon
**Conference:** IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019

---

## What’s in this mirror

* Original RUGD archives/directories re-hosted for convenience.
* File names and structure preserved where possible.
* No modifications to the data itself.

> If you’re an RUGD maintainer and would like this mirror updated or removed, please open a Discussion on this repo or contact me and I will promptly comply.

---

## License & Usage

This mirror **does not grant any license**. Usage is governed by the **original RUGD terms**. Before using the data, **review and comply with the upstream license/terms and any restrictions (e.g., research/non-commercial use)**. If any license file was included upstream, it is mirrored here unchanged.

---

## How to load (examples)

### Option A — Load with 🤗 `datasets`

This repo defines a default config with a single `all` split (see the metadata above), so it can be loaded directly:

```python
from datasets import load_dataset

# Replace "your-username/rugd" with the actual repo name
ds = load_dataset("your-username/rugd", split="all")

# Access a sample
sample = ds[0]
image = sample["image"]  # PIL.Image
mask = sample["mask"]    # PIL.Image
```
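The single `all` split leaves train/validation partitioning to you. Because frames within one scene are temporally correlated, a common precaution is to split by the `scene` field rather than by individual frames. A minimal sketch (the scene names below are hypothetical; check the actual values in the dataset):

```python
def split_by_scene(records, val_scenes):
    """Partition records (dicts with a 'scene' key) into train/val lists.

    Splitting by scene keeps near-duplicate frames from the same video
    sequence from leaking across the split boundary.
    """
    train, val = [], []
    for rec in records:
        (val if rec["scene"] in val_scenes else train).append(rec)
    return train, val


# Hypothetical usage with plain dicts standing in for dataset rows:
rows = [
    {"scene": "creek", "filename": "creek_00001.png"},
    {"scene": "park-1", "filename": "park-1_00001.png"},
    {"scene": "trail", "filename": "trail_00001.png"},
]
train_rows, val_rows = split_by_scene(rows, val_scenes={"park-1"})
```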

If you provide segmentation masks in a parallel folder (e.g., `images/` and `annotations/`), consider adding a small dataset script or a `DatasetDict` mapping so users get `{"image": ..., "label": ...}` pairs directly.
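With that kind of parallel layout, pairing can be as simple as matching file names across the two folders. A minimal sketch, assuming a flat layout with identical names in both directories (the directory names are illustrative):

```python
from pathlib import Path


def pair_images_and_masks(image_dir, mask_dir, suffix=".png"):
    """Match each image to the mask with the same file name.

    Returns a list of (image_path, mask_path) tuples, silently skipping
    images that have no corresponding mask.
    """
    image_dir, mask_dir = Path(image_dir), Path(mask_dir)
    pairs = []
    for img in sorted(image_dir.glob(f"*{suffix}")):
        mask = mask_dir / img.name
        if mask.exists():
            pairs.append((img, mask))
    return pairs
```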

### Option B — Stream raw files

If you just want to iterate over files as artifacts:

```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "your-username/rugd"
files = list_repo_files(repo_id=repo_id, repo_type="dataset")

# Download a specific file (example)
path = hf_hub_download(repo_id=repo_id, repo_type="dataset", filename=files[0])
```
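Since `list_repo_files` returns repo-relative paths, you can filter the listing before downloading anything, e.g., to fetch only one scene's files. A small sketch (the path layout shown is illustrative, not the mirror's actual structure):

```python
def files_for_scene(files, scene):
    """Keep only repo-relative paths that have `scene` as a path component."""
    return [f for f in files if scene in f.split("/")]


# Hypothetical usage with made-up repo paths:
paths = ["data/creek/frame1.png", "data/trail/frame1.png"]
creek_files = files_for_scene(paths, "creek")
```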

---

## Intended tasks (non-exhaustive)

* **Semantic segmentation**
* **Scene understanding / traversability estimation**
* **Perception for UGV / off-road robotics**

> For class definitions, color maps, and evaluation protocols, please refer to the original RUGD documentation/paper.

---

## Provenance & Integrity

* Source: official RUGD release (re-hosted).
* Files are mirrored “as-is.” If upstream checksums are available, you should verify them against the mirrored content.
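One way to run that verification, assuming upstream publishes SHA-256 digests (the file name and digest below are placeholders):

```python
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large archives never load fully into RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Hypothetical usage: compare against a published digest.
# expected = "..."  # taken from the upstream checksum list
# assert sha256_of("RUGD_frames.zip") == expected
```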

---

## Acknowledgements

All credit and thanks to the RUGD authors and their institutions for creating and releasing this dataset to the community.

---

## Contact / Takedown

If you represent the original authors/rights holders and want this mirror changed or removed, please open a Discussion here or contact me directly and I’ll address it quickly.

---

**Disclaimer:** This repository is **only a mirror** to improve accessibility for researchers. I make **no claim of authorship** and **no warranty** about the content. Always consult and follow the original dataset’s terms.