# Your leaderboard name
TITLE = """<h1 align="center" id="space-title">GridNet-HD leaderboard</h1>"""

# What does your leaderboard evaluate?
INTRODUCTION_TEXT = """
This public Hugging Face leaderboard evaluates the effectiveness of LiDAR–image fusion methods on the [GridNet-HD dataset](https://huggingface.co/datasets/heig-vd-geo/GridNet-HD) for 3D semantic segmentation of power line infrastructure.
The dataset is associated with the following paper:

> **Title**: GridNet-HD: A High-Resolution Multi-Modal Dataset for LiDAR-Image Fusion on Power Line Infrastructure

> **Authors**: Masked for review

> **Conference**: Submitted to NeurIPS 2025

"""

# Which evaluations are you running? how can people reproduce what you have?
BENCHMARKS_TEXT = """
## How it works

Please follow the file structure of the [GridNet-HD repository](https://huggingface.co/datasets/heig-vd-geo/GridNet-HD):

```
dataset-root/
├── t1z5b/
│   ├── images/           # RGB images (.JPG)
│   ├── masks/            # Semantic segmentation masks (.png, single-channel label)
│   ├── lidar/            # LiDAR point cloud (.las format with field "ground_truth")
│   └── pose/             # Camera poses and intrinsics (text files)
├── t1z6a/
│   ├── ...
├── ...
├── split.json            # JSON file specifying the train/test split
└── README.md
```

Available test areas are listed in `split.json`: t1z4, t1z5a, t1z7, t3z1, t3z2, t3z5, t5a2, t6z1, t6z5.
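
If you want to retrieve the test areas programmatically, a minimal sketch is shown below (the exact schema of `split.json` may differ, so check the file in the dataset repository):

```python
import json

# Sketch only: assumes split.json maps split names (e.g. "test") to lists of area names.
with open("GridNet-HD/split.json") as f:
    split = json.load(f)

test_areas = split.get("test", [])
print(test_areas)  # expected to list t1z4, t1z5a, t1z7, ...
```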

For example, you can run the SPT baseline as described in [SPT_GridNet-HD_baseline#usage-examples](https://huggingface.co/heig-vd-geo/SPT_GridNet-HD_baseline#usage-examples):

```bash
python inference.py --mode inference --split test --weights path/to/model.ckpt --root_dir /path/to/data/gridnet/raw
```

Then, the resulting LAS file with your added classification field must be converted to NPZ, keeping only the classification values, for example:

```python
import os

import laspy
import numpy as np


def create_npz(field_name: str = "classification", root_dir: str = "GridNet-HD"):
    # Iterate over each test area directory
    for area_name in ["t1z4", "t1z5a", "t1z7", "t3z1", "t3z2", "t3z5", "t5a2", "t6z1", "t6z5"]:
        area_path = os.path.join(root_dir, area_name)
        lidar_path = os.path.join(area_path, "lidar")

        # Skip areas without a lidar directory
        if not os.path.isdir(lidar_path):
            continue

        # Find the LAS file in the lidar directory
        las_files = [f for f in os.listdir(lidar_path) if f.lower().endswith(".las")]
        if not las_files:
            print(f"No LAS file found in {lidar_path}")
            continue

        las_file_path = os.path.join(lidar_path, las_files[0])
        print(las_file_path)

        # Read the LAS file
        try:
            las = laspy.read(las_file_path)
        except Exception as e:
            print(f"Error reading {las_file_path}: {e}")
            continue

        if field_name not in las.point_format.dimension_names:
            print(f"Error: field '{field_name}' not found in LAS file!")
            continue

        # Extract the predicted classification labels as uint8
        classif_data = np.asarray(las[field_name]).astype(np.uint8)
        las = None

        # Save the labels as a compressed .npz file, one per area
        npz_filename = f"{area_name}.npz"
        npz_path = os.path.join(root_dir, "npz", field_name, npz_filename)
        # Ensure the output directory exists
        os.makedirs(os.path.dirname(npz_path), exist_ok=True)
        np.savez_compressed(npz_path, data=classif_data)
```
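
A minimal call could look like this (the paths and field name are placeholders to adapt to your own layout):

```python
# Convert the "classification" field of every test-area LAS file found under
# ./GridNet-HD into one NPZ file per area, stored in ./GridNet-HD/npz/classification/.
create_npz(field_name="classification", root_dir="GridNet-HD")
```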

The resulting NPZ files can then be uploaded to the leaderboard using our `Submit Eval` form.


## How to reproduce our results

Please follow the instructions in the dedicated Git repositories of the models running on this dataset:
- Baseline based on image segmentation and reprojection into LiDAR: [ImageVote baseline](https://huggingface.co/heig-vd-geo/ImageVote_GridNet-HD_baseline)
- Baseline based on direct 3D LiDAR segmentation using Superpoint Transformer (SPT): [SPT baseline](https://huggingface.co/heig-vd-geo/SPT_GridNet-HD_baseline)
- Baseline based on late fusion between softmax logits from SPT and ImageVote: [LateFusionMLP baseline](https://huggingface.co/heig-vd-geo/LateFusionMLP_GridNet-HD_baseline)


"""

EVALUATION_QUEUE_TEXT = """
## Some good practices before submitting your results to the leaderboard

Make sure you convert your LAS files to NPZ before submitting them to the leaderboard. You can use the function `create_npz` in the About section to do this.

You can upload one or several NPZ files, one per area of the test dataset; the leaderboard will compare your results with the ground truth for each submitted area.
You must keep the points in the same order as in the original LAS file inside each NPZ file.
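
Before uploading, a quick sanity check is to verify that each NPZ file contains exactly as many labels as the corresponding LAS file has points (a minimal sketch, with placeholder file names):

```python
import laspy
import numpy as np

# Placeholder paths: adapt them to your own directory layout.
las = laspy.read("GridNet-HD/t1z4/lidar/your_predictions.las")
labels = np.load("GridNet-HD/npz/classification/t1z4.npz")["data"]

assert labels.shape[0] == las.header.point_count, "label count must match point count"
assert labels.dtype == np.uint8, "labels are expected to be uint8 class indices"
```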
"""

CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
CITATION_BUTTON_TEXT = r"""
GridNet-HD: A High-Resolution Multi-Modal Dataset for LiDAR-Image Fusion on Power Line Infrastructure
Masked Authors
Submitted to NeurIPS 2025.
"""