Update README.md
See the [paper](https://arxiv.org/abs/2402.02368) and [codebase](https://github.com/thuml/Large-Time-Series-Model) for more information.
## Dataset detailed descriptions
We analyze each dataset, examining its time series through the lenses of stationarity and forecastability, which lets us characterize the level of complexity inherent to each dataset.
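
As one illustration of the forecastability lens, a common proxy scores a series by one minus its normalized spectral entropy: a noise-like, flat spectrum scores near 0, while a series dominated by a single frequency scores near 1. The sketch below is illustrative only and is not necessarily the exact metric used in the paper:

```python
import numpy as np

def forecastability(x):
    """Illustrative proxy: 1 minus the normalized spectral entropy.

    A flat (noise-like) power spectrum gives entropy near its maximum
    and a score near 0; a single dominant frequency gives a score
    near 1.
    """
    spectrum = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    p = spectrum / spectrum.sum()
    p = p[p > 0]                       # drop empty bins before taking logs
    entropy = -(p * np.log(p)).sum() / np.log(len(spectrum))
    return 1.0 - entropy

t = np.arange(512)
sine = np.sin(2 * np.pi * t / 32)                      # highly predictable
noise = np.random.default_rng(0).standard_normal(512)  # unpredictable
print(forecastability(sine) > forecastability(noise))  # True
```

By this proxy, series that behave like the sine example sit at the low-complexity end, while noise-like series sit at the high end.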

<img src="./figures/utsd_complexity.png" alt="UTSD dataset complexity" align="center" />
</p>
## Usage

You can access and load UTSD with [the code in our repository](https://github.com/thuml/Large-Time-Series-Model/tree/main/scripts/UTSD).

```bash
# huggingface-cli login
# export HF_ENDPOINT=https://hf-mirror.com

# download the dataset
python ./scripts/UTSD/download_dataset.py

# dataloader
python ./scripts/UTSD/utsdataset.py
```

Note that the dataset is constructed from series of diverse lengths, so the sequence lengths of different samples vary; you can organize the data loading logic to suit your own needs.
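
For example, one simple way to batch samples of unequal length is to pad each batch to its longest series and keep the true lengths for masking. The `pad_collate` helper below is a hypothetical sketch, not part of the repository:

```python
import numpy as np

def pad_collate(batch, pad_value=0.0):
    """Pad variable-length 1-D series in `batch` to a common length.

    Returns the stacked array and the original lengths, so a model can
    mask out the padded positions.
    """
    lengths = [len(seq) for seq in batch]
    max_len = max(lengths)
    padded = np.full((len(batch), max_len), pad_value, dtype=np.float32)
    for i, seq in enumerate(batch):
        padded[i, : len(seq)] = seq
    return padded, lengths

# Three series of different lengths
series = [np.arange(3, dtype=np.float32),
          np.arange(5, dtype=np.float32),
          np.arange(2, dtype=np.float32)]
padded, lengths = pad_collate(series)
print(padded.shape)   # (3, 5)
print(lengths)        # [3, 5, 2]
```

Alternatives such as bucketing by length or cropping to a fixed window work equally well; the right choice depends on your model.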

In addition, we provide `dataset_evaluation.py` for evaluating time series datasets; you can use it to evaluate your own Hugging Face formatted dataset as follows:

```bash
python ./scripts/UTSD/dataset_evaluation.py --root_path <dataset root path> --log_path <output log path>
```
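
To illustrate the kind of summary such an evaluation can report, here is a minimal hypothetical sketch; it mirrors neither the script's actual metrics nor its log format:

```python
import numpy as np

def evaluate_series(series_list):
    """Summarize a collection of series: count and length statistics.

    Illustrative only; the provided dataset_evaluation.py computes the
    project's own metrics.
    """
    lengths = np.array([len(s) for s in series_list])
    return {
        "num_series": len(series_list),
        "min_len": int(lengths.min()),
        "max_len": int(lengths.max()),
        "mean_len": float(lengths.mean()),
    }

demo = [np.zeros(100), np.zeros(250), np.zeros(175)]
print(evaluate_series(demo))
# {'num_series': 3, 'min_len': 100, 'max_len': 250, 'mean_len': 175.0}
```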
## Acknowledgments
UTSD is mostly built from public time series datasets on the Internet, contributed by different research teams and providers. We sincerely thank all the individuals and organizations who have shared their data; without their generous contributions, this dataset would not exist.