
📢 Office Hour this Friday, 7 AM PST

🎉 Stage 1 extended to 28th February

📹 Get started with the challenge: Walkthrough, submission process, and leaderboard

👥 Find teammates here

📝 Participate in the Community Contribution Prize here

🚀 Fork the Starter Kit here!

🏗️ Claim your training credits here

Chat on Discord

🔥 Introduction

Welcome to the Learn-to-Race Autonomous Racing Virtual Challenge!

As autonomous technology approaches maturity, it is of paramount importance for autonomous vehicles to adhere to safety specifications, whether in urban driving or high-speed racing. Racing demands that each vehicle drive at its physical limits, with little margin for safety, where any infraction could lead to catastrophic failure. Given this inherent tension, autonomous racing serves as a particularly challenging proving ground for safe learning algorithms.

The objective of the Learn-to-Race competition is to push the boundary of autonomous technology, with a focus on achieving the safety benefits of autonomous driving. In this competition, you will develop a reinforcement learning (RL) agent to drive as fast as possible while adhering to safety constraints. In Stage 1, participants will develop and evaluate their agents on Thruxton Circuit (top), which is included with the Learn-to-Race environment. In Stage 2, participants will be evaluated on an unseen track, the North Road Track at the Las Vegas Motor Speedway (bottom), with the opportunity to 'practice' with unfrozen model weights for one hour prior to evaluation.

[Images: Thruxton Circuit (top) and the North Road Track at the Las Vegas Motor Speedway (bottom)]

🎮 The Learn-to-Race Framework

Learn-to-Race is an open-source, Gym-compliant framework that leverages a high-fidelity racing simulator developed by Arrival. Arrival's simulator not only captures complex vehicle dynamics and renders photorealistic views, but also plays a key role in bringing autonomous racing technology to real life in the Roborace series, the world's first extreme competition of teams developing self-driving AI. Refer to learn-to-race.org to learn more.

Learn-to-Race provides access to customizable, multimodal sensory inputs. One can access RGB images from any specified location, semantic segmentation, and vehicle states (e.g., pose, velocity). During local development, participants may use any of these inputs. During evaluation, agents will ONLY have access to speed and RGB images from cameras placed on the front, right, and left of the vehicle.
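
As an illustration, an evaluation-time policy can only consume the four permitted inputs. The sketch below shows this shape; the observation keys, camera names, and `[steering, acceleration]` action format here are assumptions for illustration, not the exact starter-kit interface:

```python
# Hypothetical sketch of an evaluation-time policy for Learn-to-Race.
# The dict keys ("speed", "camera_front", ...) and the action format
# are assumptions; consult the starter kit for the actual interface.

def select_action(observation):
    """Map the permitted inputs (speed + 3 RGB cameras) to an action."""
    speed = observation["speed"]          # scalar, assumed m/s
    front = observation["camera_front"]   # HxWx3 RGB array
    left = observation["camera_left"]
    right = observation["camera_right"]
    # Placeholder policy: drive straight, ease off the accelerator
    # once the vehicle is moving quickly.
    steering = 0.0
    acceleration = 1.0 if speed < 20.0 else 0.0
    return [steering, acceleration]
```

Anything else available during local development (e.g., semantic segmentation or full vehicle state) must not be relied on here, since it is absent at evaluation time.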

๐Ÿ‹๏ธ Competition Structure

Format

The Learn-to-Race challenge tests an agent's ability to execute the requisite behaviors for competition-style track racing, through multimodal perceptual input. The competition consists of 2 stages.

  • In Stage 1, participants will submit model checkpoints to AIcrowd for evaluation on Thruxton Circuit. The submissions will first be ranked on success rate, and then submissions with the same success rate will be ranked on average speed. Aside from Thruxton Circuit, additional race tracks are available in the Learn-to-Race environment for development.
  • The top 10 teams on the leaderboard will enter Stage 2, where their agents will be evaluated on an unseen track. The top-performing teams will submit their models (with initialization) to AIcrowd for training on the unseen track for a fixed period of one hour. During the one-hour 'practice' period, participants are free to perform any model updates or exploration strategies of their choice; the number of safety infractions will be accumulated, reflecting the expectation that an autonomous agent should remain safe throughout its interaction with the environment. After the 'practice' period, the agent will be evaluated on the unseen track. The participating teams will first be ranked on success rate, and then submissions with the same success rate will be ranked on a weighted sum of the number of safety infractions and the average speed.
  • To prevent participants from achieving a high success rate by driving very slowly, we will set a maximum episode length based on an average speed of 30 km/h during evaluation.
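
For intuition, the 30 km/h floor translates into a time budget per track as follows (a back-of-the-envelope sketch; the evaluator's exact episode-length formula may differ, and the 3.8 km figure for Thruxton is an approximation):

```python
def max_episode_seconds(track_length_m, floor_kmh=30.0):
    """Time budget if the agent must average at least `floor_kmh` km/h."""
    floor_ms = floor_kmh * 1000.0 / 3600.0  # convert km/h to m/s
    return track_length_m / floor_ms

# e.g., Thruxton Circuit is roughly 3.8 km long:
budget = max_episode_seconds(3800.0)  # ~456 seconds to finish a lap
```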

Evaluation Metrics

  • Success Rate: Each race track is partitioned into a fixed number of segments, and the success rate is calculated as the number of successfully completed segments over the total number of segments. If the agent fails at a segment, it respawns, stationary, at the beginning of the next segment. If the agent successfully completes a segment, it continues on to the next segment, carrying over its current speed.
  • Average Speed: Average speed is defined as the total distance traveled over time, which is used as a proxy for performance.
  • The Number of Safety Infractions: The number of safety infractions is accumulated during the 1-hour 'practice' period in Stage 2 of the competition. The agent is considered to have incurred a safety infraction if two wheels of the vehicle leave the drivable area, the vehicle collides with an object, or the vehicle does not make sufficient progress (e.g., gets stuck). In Learn-to-Race, the episode terminates upon a safety infraction.
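
Putting the first two metrics together, scoring a run over the track segments can be sketched like this (an illustration of the definitions above, not the evaluator's actual code):

```python
def score_run(segment_results, distances_m, times_s):
    """Compute success rate and average speed from per-segment outcomes.

    segment_results: list of bools, True if the segment was completed
    distances_m: distance actually traveled in each segment, in meters
    times_s: time spent in each segment, in seconds
    """
    success_rate = sum(segment_results) / len(segment_results)
    avg_speed_ms = sum(distances_m) / sum(times_s)  # meters per second
    return success_rate, avg_speed_ms

# 3 of 4 segments completed; 1620 m traveled over 105 s in total.
rate, speed = score_run([True, True, False, True],
                        [500.0, 500.0, 120.0, 500.0],
                        [25.0, 25.0, 30.0, 25.0])
```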

📜 Rules

Additionally, participants:

  • are limited to 1 submission every 24 hours
  • will only have access to speed and RGB images from cameras placed on the front, right, and left of the vehicle during evaluation
  • are restricted from accessing model weights or custom logs during evaluation
  • must submit their source code, if among the top performers

🚀 Getting Started

Please complete the following steps to get started:

  • To obtain access to the autonomous racing simulator, go to the 'Resources' tab on the challenge page, and sign the license agreement that will allow you to download the simulator. (We suggest that you do this as soon as possible).
    • 'PI' refers to the principal investigator. If you are part of a team with a lead researcher, please fill in their information; otherwise, use your own name.
  • Clone the official L2R starter kit, to obtain the Learn-to-Race training framework, baselines, and starter code templates.
  • Review the documentation, as well as additional notes/suggestions, for more information on installation, running agents, and evaluation.

Here is a summary of useful material to get started with the Learn-to-Race Competition:

Papers

  • Learn-to-Race: A Multimodal Control Environment for Autonomous Racing. James Herman*, Jonathan Francis*, Siddha Ganju, Bingqing Chen, Anirudh Koul, Abhinav Gupta, Alexey Skabelkin, Ivan Zhukov, Max Kumskoy, Eric Nyberg. ICCV 2021. [PDF] [Code]
  • Safety-aware Policy Optimisation for Autonomous Racing. Bingqing Chen, Jonathan Francis, James Herman, Jean Oh, Eric Nyberg, Sylvia L. Herbert. [PDF]

Video Instructions:

- Part 1: Downloading simulator, navigating code and making a submission: https://youtu.be/W6WdWrB10g4 
- Part 2: Challenge walkthrough, submission process and leaderboard: https://www.youtube.com/watch?v=pDBFr450aI0

Update 17/January/2022: Changes to the Starter Kit and Code Documentation

This post concerns recent changes and patches made to the starter kit. These patches address recent issues that contestants were facing regarding stability, metrics calculations, and agent initialisation. Additionally, camera configuration interfaces were simplified, and the codebase documentation was updated and extended. Some changes included in this patch necessitate re-evaluation of previous submissions, which may affect leaderboard results.

Changelog:

  1. Simplified the camera interface code, for environment/simulator interaction.
  2. Added additional camera configurations for other sensors that are permitted for use during training.
  3. Resolved agent initialisation issues, related to yaw ambiguity; this corrects situations where agents respawned with incorrect orientation after failing track segments. Previously, this produced spurious results, where agents were assigned incorrect segment-completion metrics, during evaluation. This fix may affect leaderboard results.
  4. Provided additional agent tracking information, displayed on the console during training and evaluation.
  5. Revised code documentation, to incorporate recent inquiries:
    1. Environment description: https://learn-to-race.readthedocs.io/en/latest/env_overview.html
    2. Sensor configuration: https://learn-to-race.readthedocs.io/en/latest/sensors.html#creating-custom-sensor-configurations
    3. Getting started: https://learn-to-race.readthedocs.io/en/latest/getting_started.html 

We hope participants find these changes helpful!

Participants are strongly encouraged to incorporate these changes as soon as possible. To do this, merge the latest changes from the upstream repository into your forked repository: https://gitlab.aicrowd.com/learn-to-race/l2r-starter-kit
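
One common way to pull the upstream changes into your fork locally is shown below; the `upstream` remote name and `master` branch are assumptions, so check your repository's default branch:

```shell
# Add the official starter kit as an extra remote (one-time setup).
git remote add upstream https://gitlab.aicrowd.com/learn-to-race/l2r-starter-kit.git

# Fetch the latest upstream changes and merge them into your fork.
git fetch upstream
git merge upstream/master
git push origin master
```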

Claim your $50 training credits here.

๐Ÿ† Prizes

We are proud to have AWS sponsor generous prizes for this challenge!

The Learn-to-Race Challenge extends a special invitation to the top 10 teams to collaborate, improve L2R, and jointly advance the field of research. Read below for more details on the prizes 👇

The top 3 teams on the leaderboard will get

  • $1,000 each
  • 1 DeepRacer car each

All of the top 10 teams on the leaderboard will get

  • Mentorship from the organizing committee and the Safe Learning for Autonomous Driving Workshop organizers to author papers for submission to the workshop at an AI conference.

Top 10 Community Contributors will get

  • $100 each. Read here for details on the Community Contribution Prize.

And lastly, every single team/participant that participates in the challenge will get

โฑ๏ธ Timeline

Stage 1

  • 6th December '21 - 28th February '22

Stage 2

  • 1st March '22 - 10th March '22

🤖 Team

  • Jonathan Francis (CMU)
  • Siddha Ganju (CMU alum)
  • Shravya Bhat (CMU)
  • Sidharth Kathpal (CMU)
  • Bingqing Chen (CMU)
  • James Herman (CMU alum)
  • Ivan Zhukov (Arrival)
  • Max Kumskoy (Arrival)
  • Jyotish P (AIcrowd)
  • Sharada Mohanty (AIcrowd)
  • Sahika Genc (AWS)
  • Cameron Peron (AWS)

๐Ÿค Sponsors

The Challenge is organized and hosted by AIcrowd, with the provision of challenge code, simulators, and challenge materials from faculty and students from Carnegie Mellon University, engineers from ARRIVAL Ltd., and engineers from AIcrowd. Third parties, such as Amazon Web Services, are providing sponsorship to cover running costs, prizes, and compute grants.

📱 Contact

If you have any questions, please contact Jyotish P (jyotish@aicrowd.com), or consider posting on the Community Discussion board, or join the party on our Discord!