
[ICRA2022] General Place Recognition: City-scale UGV Localization

Localization, SLAM, Place Recognition, Visual Navigation, Loop Closure Detection

$5000 Prize Money

[ICRA2022] General Place Recognition for Large-scale SLAM

Challenge 1: Large-scale 3D Localization (3D-3D Localization)

🌐 Website: http://gprcompetition.com

πŸ† SDK in Github: https://github.com/MetaSLAM/GPR_Competition

Note: Timely updates will appear on our website and GitHub.


Overview

The ability of mobile robots to recognize previously visited or mapped areas is essential to reliable autonomous systems. Place recognition for localization and SLAM is a promising way to achieve this ability. However, current approaches degrade under differences in viewpoint and in environmental conditions that affect visual appearance (e.g. illumination, season, time of day), and so they struggle to provide continuous localization in environments that change over time. These issues are compounded when localizing in large-scale (several-kilometer) maps, where repeated terrain and geometry increases localization uncertainty. This competition aims to push the state of the art in visual and LiDAR localization techniques for large-scale environments, with an emphasis on environments with changing conditions.

To evaluate localization performance under long-term and large-scale tasks, we provide benchmark datasets to investigate robust re-localization under changing viewpoints (orientation and translation) and environmental conditions (illumination and time of day). This competition will include two challenges. The winner of each challenge will receive 3000 USD while the runner-up will receive 2000 USD.

This is the first challenge: Large-scale 3D Localization (3D-3D Localization). If you are interested in our second challenge, Visual 2D-2D Localization, please visit [here].

Software Development Kit (SDK)

To accelerate development, we provide a complete SDK for loading the datasets and evaluating results. Python APIs are provided so that participants can conveniently integrate the interfaces into their own code. The SDK lets you:

  • Access our datasets easily;
  • Evaluate metrics for visual/LiDAR place recognition;
  • Submit results for online evaluation at crowdAI.

With this SDK, participants can focus on their algorithms and on improving place recognition accuracy. The SDK is hosted in the GitHub repo: https://github.com/MetaSLAM/GPR_Competition

Dataset

This Pittsburgh City-scale Dataset focuses on LiDAR place recognition over a large-scale urban area. We collected 55 vehicle trajectories covering part of Pittsburgh and thus spanning diverse environments. Each trajectory overlaps with the others at at least one junction, and some trajectories share multiple junctions. This property makes the dataset suitable for tasks such as LiDAR place recognition and multi-map fusion.

The original dataset contains point clouds and GPS data. We generate ground-truth poses by SLAM, fused with the GPS data and later refined with Interactive SLAM. This process also yields the map of each trajectory. For convenience, we slice each map along the trajectory into several submaps; each submap covers 50m*50m, with 2m between consecutive submaps. The global 6-DoF ground-truth pose of each submap is also given, so you can easily determine the distance relation between submaps.
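
For illustration, the distance relation between two submaps can be computed from the translation part of their 6-DoF poses. This is a minimal sketch, assuming each pose is stored as a 6-element array whose first three entries are the (x, y, z) translation (the remaining three being orientation); the example poses are hypothetical:

```python
import numpy as np

def submap_distance(pose_a, pose_b):
    """Euclidean distance between the translation parts of two 6-DoF poses."""
    # Only the first three entries (x, y, z) contribute to the distance.
    return float(np.linalg.norm(np.asarray(pose_a)[:3] - np.asarray(pose_b)[:3]))

# Two hypothetical submap poses 2 m apart along x:
pose_1 = np.array([10.0, 5.0, 0.0, 0.0, 0.0, 0.1])
pose_2 = np.array([12.0, 5.0, 0.0, 0.0, 0.0, 0.1])
print(submap_distance(pose_1, pose_2))  # 2.0
```

A threshold on this distance (e.g. a few meters) is the usual way to decide whether a query and a database submap count as the same place.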

Pittsburgh City-scale Dataset

In this dataset, we include:

  • Point cloud submaps (size 40m*40m, ground removed, downsampled to 4096 points, sampled every 2m along the trajectory).
  • Ground-truth 6-DoF poses of the submaps.
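
If you ever need to reproduce the fixed 4096-point submap size on your own point clouds, a random-downsampling sketch looks like the following (this is illustrative, not the dataset's official preprocessing):

```python
import numpy as np

def downsample(points, n=4096, seed=0):
    """Randomly downsample an (N, 3) cloud to exactly n points.

    Samples with replacement when the cloud has fewer than n points,
    so the output shape is always (n, 3).
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=n, replace=len(points) < n)
    return points[idx]
```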

Both the training data and the testing data can be downloaded [here]. We also provide sample data [here] so you can take a quick look. Our SDK helps you manage the data more easily.

File structure:

    GPR
    β”œβ”€β”€ TEST --------------------------> evaluation set for submission
    β”‚   β”œβ”€β”€ 000000.pcd ----------------> test submap
    β”‚   β”œβ”€β”€ 000001.pcd
    β”‚   .
    β”‚   .
    β”‚   └── 005622.pcd
    β”œβ”€β”€ TRAIN -------------------------> training set
    β”‚   β”œβ”€β”€ train_1
    β”‚   β”‚   β”œβ”€β”€ 000001.pcd -------------> training submap
    β”‚   β”‚   β”œβ”€β”€ 000001_pose6d.npy ----> corresponding ground truth
    β”‚   β”‚   .
    β”‚   β”‚   .
    β”‚   β”‚   β”œβ”€β”€ 001093.pcd
    β”‚   β”‚   └── 001093_pose6d.npy
    β”‚   β”œβ”€β”€ train_2
    β”‚   β”œβ”€β”€ .
    β”‚   β”œβ”€β”€ .
    β”‚   └── train_15
    └── VAL ----------------------------> sample tracks for self evaluation
        β”œβ”€β”€ val_1
        β”‚   β”œβ”€β”€ DATABASE
        β”‚   β”‚   β”œβ”€β”€ 000001.pcd
        β”‚   β”‚   β”œβ”€β”€ 000001_pose6d.npy
        β”‚   β”‚   β”œβ”€β”€ .
        β”‚   β”‚   β”œβ”€β”€ .
        β”‚   β”‚   β”œβ”€β”€ 000164.pcd
        β”‚   β”‚   └── 000164_pose6d.npy
        β”‚   └── QUERY
        β”‚           β”œβ”€β”€ forward
        β”‚           β”‚     β”œβ”€β”€000001.pcd
        β”‚           β”‚     β”œβ”€β”€000001_pose6d.npy
        β”‚           β”‚     β”œβ”€β”€...
        β”‚           └── backward
        β”œβ”€β”€ val_2
        β”œβ”€β”€ val_3
        β”‚   β”œβ”€β”€ DATABASE
        β”‚   β”‚   β”œβ”€β”€ 000001.pcd
        β”‚   β”‚   β”œβ”€β”€ 000001_pose6d.npy
        β”‚   β”‚   β”œβ”€β”€ .
        β”‚   β”‚   β”œβ”€β”€ .
        β”‚   β”‚   β”œβ”€β”€ 000164.pcd
        β”‚   β”‚   └── 000164_pose6d.npy
        β”‚   └── QUERY
        β”‚           β”œβ”€β”€ rot_15
        β”‚           β”‚     β”œβ”€β”€000001.pcd
        β”‚           β”‚     β”œβ”€β”€000001_pose6d.npy
        β”‚           β”‚     β”œβ”€β”€...
        β”‚           β”œβ”€β”€ rot_30
        β”‚           └── rot_180
        β”œβ”€β”€ val_4
        β”œβ”€β”€ val_5
        └── val_6
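
Independently of our SDK (which handles this for you), the submap/pose pairs in a DATABASE or QUERY folder can be gathered with plain Python. This sketch assumes the NNNNNN.pcd / NNNNNN_pose6d.npy naming shown in the tree above:

```python
from pathlib import Path
import numpy as np

def load_split(split_dir):
    """Collect (submap_path, pose) pairs from a DATABASE or QUERY folder."""
    pairs = []
    for pcd_path in sorted(Path(split_dir).glob("*.pcd")):
        # Each NNNNNN.pcd has a sibling NNNNNN_pose6d.npy ground-truth pose.
        pose_path = pcd_path.with_name(pcd_path.stem + "_pose6d.npy")
        pose = np.load(pose_path)
        # The point cloud itself can be read with e.g.
        # open3d.io.read_point_cloud(str(pcd_path)).
        pairs.append((pcd_path, pose))
    return pairs
```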

After you download the testing data, you will see the file TEST_ROUND_2.tar.gz. It contains both the reference and query submaps, but in a shuffled order so that competitors cannot know their true relation. Your task is to compute a feature for each submap with your method and represent the features as a (submap_num * feature_dim) numpy.ndarray (the order of the features must exactly match the order of the submaps). Save it as a *.npy file and upload that file.
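
A minimal end-to-end sketch of this submission format is given below. The describe function is a purely illustrative placeholder (a normalized height histogram); replace it with your own place-recognition model. Only the (submap_num * feature_dim) shape and the submap ordering matter for submission:

```python
import numpy as np

def describe(points, feature_dim=256):
    """Placeholder global descriptor: a normalized height histogram.

    Takes an (N, 3) point cloud and returns a (feature_dim,) vector.
    """
    hist, _ = np.histogram(points[:, 2], bins=feature_dim, range=(-10.0, 10.0))
    hist = hist.astype(np.float32)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def build_submission(submap_clouds, out_file="features.npy"):
    """Stack one feature per submap, preserving submap order, and save as .npy."""
    feats = np.stack([describe(pts) for pts in submap_clouds])
    np.save(out_file, feats)  # shape: (submap_num, feature_dim)
    return feats

# Example with random stand-in clouds (4096 points each, as in the dataset):
clouds = [np.random.rand(4096, 3) * 20.0 - 10.0 for _ in range(3)]
features = build_submission(clouds)
print(features.shape)  # (3, 256)
```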