
[ICRA2022] General Place Recognition: Visual Terrain Relative Navigation

Localization, SLAM, Place Recognition


[ICRA2022] General Place Recognition for Large-scale SLAM

Challenge 2 --- Visual 2D-2D Localization.

🌐 Website: http://gprcompetition.com

πŸ† SDK in Github: https://github.com/MetaSLAM/GPR_Competition

Note: Timely updates will be posted on our website and GitHub.


Overview

Update 2022-06-21: The Round 2 competition dataset is now available!

 

The ability of mobile robots to recognize previously visited or mapped areas is essential for reliable autonomy. Place recognition for localization and SLAM is a promising way to achieve this ability. However, current approaches are degraded by differences in viewpoint and by environmental conditions that change visual appearance (e.g. illumination, season, time of day), so they struggle to provide continuous localization in environments that change over time. These issues are compounded when localizing in large-scale (several-kilometer) maps, where repeated terrain and geometry increase localization uncertainty. This competition aims to push state-of-the-art visual and LiDAR techniques for localization in large-scale environments, with an emphasis on environments with changing conditions.

To evaluate localization performance under long-term and large-scale tasks, we provide benchmark datasets to investigate robust re-localization under changing viewpoints (orientation and translation) and environmental conditions (illumination and time of day). This competition will include two challenges. The winner of each challenge will receive 3000 USD while the runner-up will receive 2000 USD.

This is the second challenge: Visual 2D-2D Localization. If you are interested in the first challenge, Large-scale 3D Localization (3D-3D Localization), please visit [here].

Software Development Kit (SDK)

To accelerate development, we provide a complete SDK for loading the datasets and evaluating results. The SDK exposes Python APIs that participants can integrate directly into their own code, including APIs to:

  • access our datasets easily;
  • evaluate metrics for visual/LiDAR place recognition;
  • submit results for online evaluation at crowdAI.

With this SDK, participants can focus on their algorithms and on improving place recognition accuracy. The SDK is hosted in the GitHub repo: https://github.com/MetaSLAM/GPR_Competition
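
To illustrate the retrieval-style evaluation that the SDK automates, here is a minimal sketch of a top-1 recall computation over query and reference features. The function and variable names below are our own placeholders and are not the SDK's actual API.

import numpy as np

def top1_recall(query_feats, ref_feats, gt_indices):
    # For each query, find the nearest reference feature (L2 distance) and
    # check whether it is the ground-truth match. gt_indices[i] is the index
    # of the correct reference image for query i.
    dists = ((query_feats[:, None, :] - ref_feats[None, :, :]) ** 2).sum(axis=-1)
    nearest = dists.argmin(axis=1)
    return float((nearest == np.asarray(gt_indices)).mean())

# Toy usage with random features; real features would come from your model.
rng = np.random.default_rng(0)
q = rng.normal(size=(10, 128)).astype(np.float32)
r = rng.normal(size=(50, 128)).astype(np.float32)
print(top1_recall(q, r, gt_indices=rng.integers(0, 50, size=10)))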

Dataset

This dataset focuses on visual place recognition over a large-scale trajectory: a 150 km flight from Ohio to Pittsburgh flown by a helicopter carrying a nadir-facing high-resolution camera. The trajectory covers several types of environments of varying difficulty, including urban/suburban, forested, rural, and other natural terrain.

Part of the difficulty of this challenge lies in correctly matching the query imagery to reference map imagery taken several years prior: the flight was captured in August 2017, while the georeferenced satellite imagery dates from 2012.

Ground truth positions for the flight were collected using a NovAtel SPAN GPS+INS, with sub-meter accuracy.

In this dataset, we include:

  • High-resolution (compressed to 500x500) helicopter imagery, captured at 20 fps. Timestamps are synchronized with the rest of the system.
  • A paired reference satellite image for each helicopter frame.
  • IMU orientation (scalar-last quaternion, in the ECEF reference frame); see the conversion sketch after this list.
  • Global positions (UTM coordinates).
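
As a quick note on conventions, a scalar-last quaternion is ordered (x, y, z, w), which is also the ordering SciPy's Rotation.from_quat expects. Below is a minimal sketch of converting one orientation sample to a rotation matrix; the example values are made up.

import numpy as np
from scipy.spatial.transform import Rotation

# Scalar-last means (x, y, z, w); this matches scipy's default convention.
q_xyzw = np.array([0.0, 0.0, 0.70710678, 0.70710678])  # example: 90 deg about z

R = Rotation.from_quat(q_xyzw).as_matrix()  # 3x3 rotation, here w.r.t. the ECEF frame
print(np.round(R, 3))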

 

You can download the dataset using these links: [Dropbox] or [Baidu Netdisk] (extraction code: qghd).

 

The Round 2 dataset is available here: [Dropbox] or [Baidu Netdisk] (extraction code: qghd).

 

File structure:

GPR
β”œβ”€β”€ Test --------------------------> evaluation set for submission
β”‚   β”œβ”€β”€ 000000.png ----------------> test image
β”‚   β”œβ”€β”€ 000001.png
β”‚   .
β”‚   .
β”‚   └── 004208.png
β”œβ”€β”€ Train -------------------------> training set
β”‚   β”œβ”€β”€ query_images
β”‚   β”‚   β”œβ”€β”€ 000001.png ------------> query image
β”‚   β”‚   .
β”‚   β”‚   .
β”‚   β”‚   └── 010435.png
β”‚   β”œβ”€β”€ reference_images
β”‚   β”‚   β”œβ”€β”€ offset_0_None
β”‚   β”‚   β”‚   β”œβ”€β”€ 000001.png --------> reference image
β”‚   β”‚   β”‚   .
β”‚   β”‚   β”‚   .
β”‚   β”‚   β”‚   └── 002852.png
β”‚   β”‚   β”œβ”€β”€ offset_20_North -------> offset reference images
β”‚   β”‚   β”œβ”€β”€ offset_20_South
β”‚   β”‚   β”œβ”€β”€ offset_40_North
β”‚   β”‚   └── offset_40_South
β”‚   β”œβ”€β”€ gt_matches.csv ------------> ground truth query-to-reference mappings
β”‚   β”œβ”€β”€ query.csv -----------------> query image information
β”‚   └── reference.csv -------------> reference image information
└── Val ---------------------------> validation set
    β”œβ”€β”€ query_images
    β”‚   β”œβ”€β”€ 000001.png ------------> query image
    β”‚   .
    β”‚   .
    β”‚   └── 001683.png
    β”œβ”€β”€ reference_images
    β”‚   β”œβ”€β”€ offset_0_None
    β”‚   β”‚   β”œβ”€β”€ 000001.png --------> reference image
    β”‚   β”‚   .
    β”‚   β”‚   .
    β”‚   β”‚   └── 000458.png
    β”‚   β”œβ”€β”€ offset_20_North -------> offset reference images
    β”‚   β”œβ”€β”€ offset_20_South
    β”‚   β”œβ”€β”€ offset_40_North
    β”‚   └── offset_40_South
    β”œβ”€β”€ gt_matches.csv ------------> ground truth query-to-reference mappings
    β”œβ”€β”€ query.csv -----------------> query image information
    └── reference.csv -------------> reference image information
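
A minimal sketch of loading the training ground truth and opening a matched query/reference pair with pandas and Pillow follows. The CSV column names used here ("query" and "reference") and the zero-padded integer filenames are assumptions based on the layout above; check the actual CSV headers before relying on them.

from pathlib import Path
import pandas as pd
from PIL import Image

root = Path("GPR/Train")

# gt_matches.csv maps each query image to its reference image.
# The column names 'query' and 'reference' are assumed, not confirmed.
gt = pd.read_csv(root / "gt_matches.csv")
row = gt.iloc[0]

query_img = Image.open(root / "query_images" / f"{int(row['query']):06d}.png")
ref_img = Image.open(root / "reference_images" / "offset_0_None" / f"{int(row['reference']):06d}.png")
print(query_img.size, ref_img.size)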

 

Submission

After you download the testing data, you will see the file Test.zip. It contains both the reference and query images, but in a shuffled order so that competitors cannot know their true relation. What you need to do is compute a feature for each image using your method and store the features in a (num_images, feature_dim) numpy.ndarray, where the order of the features must exactly match the order of the images. Save this array as a *.npy file and upload it.

New for Round 2: your feature dimension must not exceed 2048.
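
Below is a minimal sketch of producing a valid submission file, assuming the test images are processed in sorted filename order and using a trivial placeholder descriptor (a flattened grayscale thumbnail) in place of a real method; the feature dimension stays within the 2048 limit.

from pathlib import Path
import numpy as np
from PIL import Image

test_dir = Path("GPR/Test")
image_paths = sorted(test_dir.glob("*.png"))  # feature order must match image order

def describe(path, side=32):
    # Placeholder descriptor: a 32x32 grayscale thumbnail flattened to 1024 dims.
    # Replace this with your own method; keep feature_dim <= 2048 for Round 2.
    img = Image.open(path).convert("L").resize((side, side))
    v = np.asarray(img, dtype=np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)  # L2-normalize

features = np.stack([describe(p) for p in image_paths])  # (num_images, feature_dim)
assert features.shape[1] <= 2048
np.save("submission.npy", features)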