GridNet-HD leaderboard
This public Hugging Face Leaderboard evaluates the effectiveness of LiDAR-image fusion methods on the GridNet-HD dataset for 3D semantic segmentation of power line infrastructure. The dataset is associated with the following paper:
Title: GridNet-HD: A High-Resolution Multi-Modal Dataset for LiDAR-Image Fusion on Power Line Infrastructure
Authors: Masked for now (anonymized for review)
Conference: Submitted to NeurIPS 2025
- "headers": [
- "Eval Name",
- "Result Name",
- "Submission Date",
- "mIoU β¬οΈ",
- "Accuracy β¬οΈ"
- "data": [
- [
- "HEIG-VD_MLP_LateFusion_GridNet-HD_Baseline",
- "MLP_LateFusion_GridNet-HD_Baseline",
- "2025-05-22T11:42:53Z",
- 74.22,
- 84.34
- [
- "HEIG-VD_ImageVote_GridNet-HD_Baseline",
- "ImageVote_GridNet-HD_Baseline",
- "2025-05-22T07:14:30Z",
- 69.11,
- 86.89
- [
- "HEIG-VD_SPT_GridNet-HD_Baseline",
- "SPT_GridNet-HD_Baseline",
- "2025-05-22T10:12:32Z",
- 66.91,
- 86.82
- [
- "metadata": null
How it works
Please respect the file structure of the GridNet-HD repository:
dataset-root/
├── t1z5b/
│   ├── images/        # RGB images (.JPG)
│   ├── masks/         # Semantic segmentation masks (.png, single-channel label)
│   ├── lidar/         # LiDAR point cloud (.las format with field "ground_truth")
│   └── pose/          # Camera poses and intrinsics (text files)
├── t1z6a/
│   └── ...
├── ...
├── split.json         # JSON file specifying the train/test split
└── README.md
Available test areas are listed in split.json: t1z4, t1z5a, t1z7, t3z1, t3z2, t3z5, t5a2, t6z1, t6z5.
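If you want to iterate over the test areas programmatically, something like the following can be used. This sketch assumes split.json maps split names (e.g. "test") to lists of area names; check the actual file in the dataset for its exact layout.

import json

with open("dataset-root/split.json") as f:   # path assumed to match the tree above
    split = json.load(f)

test_areas = split["test"]    # assumption: a "test" key holding the area names
print(test_areas)             # expected to contain t1z4, t1z5a, ...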
For example, you can run the SPT baseline as described in SPT_GridNet-HD_baseline#usage-examples:
python inference.py --mode inference --split test --weights path/to/model.ckpt --root_dir /path/to/data/gridnet/raw
Then, the resulting LAS file containing your added classification field must be converted to NPZ, keeping only the classification values, for example:
import os

import laspy
import numpy as np


def create_npz(field_name: str = "classification", root_dir: str = "GridNet-HD"):
    # Iterate over each test area directory
    for area_name in ["t1z4", "t1z5a", "t1z7", "t3z1", "t3z2", "t3z5", "t5a2", "t6z1", "t6z5"]:
        area_path = os.path.join(root_dir, area_name)
        lidar_path = os.path.join(area_path, "lidar")

        # Check if the lidar directory exists
        if not os.path.isdir(lidar_path):
            continue

        # Find the LAS file in the lidar directory
        las_files = [f for f in os.listdir(lidar_path) if f.lower().endswith(".las")]
        if not las_files:
            print(f"No LAS file found in {lidar_path}")
            continue
        las_file_path = os.path.join(lidar_path, las_files[0])
        print(las_file_path)

        # Read the LAS file
        try:
            las = laspy.read(las_file_path)
        except Exception as e:
            print(f"Error reading {las_file_path}: {e}")
            continue
        if field_name not in las.point_format.dimension_names:
            print(f"Error: field '{field_name}' not found in LAS file!")
            continue

        # Extract the classification data, keeping the original point order
        classif_data = np.asarray(las[field_name]).astype(np.uint8)
        las = None

        # Save the classification data as a compressed .npz file
        npz_filename = f"{area_name}.npz"
        npz_path = os.path.join(root_dir, "npz", field_name, npz_filename)

        # Ensure the output directory exists
        os.makedirs(os.path.dirname(npz_path), exist_ok=True)
        np.savez_compressed(npz_path, data=classif_data)
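For instance, the function can be called as below (the dataset root path is only a placeholder):

# Convert all test areas
create_npz(field_name="classification", root_dir="/path/to/GridNet-HD")

# Quick look at one of the generated files
import numpy as np
arr = np.load("/path/to/GridNet-HD/npz/classification/t1z4.npz")["data"]
print(arr.shape, arr.dtype)   # one uint8 label per LiDAR point, in the original point order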
The resulting NPZ files can be uploaded to the leaderboard using our Submit Eval form.
How to reproduce our results
Please follow the instructions in the dedicated Git repositories of the models running on this dataset:
- Baseline based on image segmentation and reprojection into LiDAR: ImageVote baseline
- Baseline based on LiDAR 3D segmentation directly using Superpoint Transformer (SPT): SPT baseline
- Baseline based on late fusion between softmax logits from SPT and ImageVote (a conceptual sketch follows below): LateFusionMLP baseline
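For intuition only, the late-fusion idea can be sketched as follows: per point, the class-probability vectors produced by the two branches are concatenated and mapped to fused class scores by a small MLP. This is not the authors' implementation; the layer sizes and the number of classes below are placeholders.

import torch
import torch.nn as nn

num_classes = 12            # placeholder, use the actual number of GridNet-HD classes

# Hypothetical fusion MLP: concatenated per-point probabilities -> fused class scores
fusion_mlp = nn.Sequential(
    nn.Linear(2 * num_classes, 64),
    nn.ReLU(),
    nn.Linear(64, num_classes),
)

# spt_probs and imagevote_probs: (num_points, num_classes) softmax outputs of each branch
spt_probs = torch.rand(1000, num_classes).softmax(dim=1)
imagevote_probs = torch.rand(1000, num_classes).softmax(dim=1)

fused_logits = fusion_mlp(torch.cat([spt_probs, imagevote_probs], dim=1))
pred_labels = fused_logits.argmax(dim=1)    # per-point class prediction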
Some good practices before submitting your results to the leaderboard:
Make sure you convert your LAS files to NPZ before submitting them to the leaderboard. You can use the function create_npz
in the About section to do this.
You can upload NPZ files for one or several areas of the test set, one file per area. The leaderboard compares your results with the ground-truth labels of each area, so the points inside each NPZ file must keep the same order as in the original LAS file.
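As a quick sanity check before submitting, you can verify that each NPZ file contains exactly one label per point of the corresponding LAS file (the paths and the LAS file name below are placeholders):

import laspy
import numpy as np

las = laspy.read("/path/to/GridNet-HD/t1z4/lidar/area.las")           # placeholder path
labels = np.load("/path/to/GridNet-HD/npz/classification/t1z4.npz")["data"]

assert len(labels) == las.header.point_count, "label count must match the LAS point count"
print("OK:", len(labels), "labels for", las.header.point_count, "points")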