
🎉 Stage 1 extended to 28th February

📹 Get started with the challenge: Walkthrough, submission process, and leaderboard

👥 Find teammates here

📝 Participate in the Community Contribution Prize here

🚀 Fork the Starter Kit here!

🏗️ Claim your training credits here

Chat on Discord

🔥 Introduction

Welcome to the Learn-to-Race Autonomous Racing Virtual Challenge!

As autonomous technology approaches maturity, it is of paramount importance for autonomous vehicles to adhere to safety specifications, whether in urban driving or high-speed racing. Racing demands that each vehicle drive at its physical limits with little margin for safety, where any infraction could lead to catastrophic failure. Given this inherent tension, autonomous racing serves as a particularly challenging proving ground for safe learning algorithms.

The objective of the Learn-to-Race competition is to push the boundary of autonomous technology, with a focus on achieving the safety benefits of autonomous driving. In this competition, you will develop a reinforcement learning (RL) agent to drive as fast as possible while adhering to safety constraints. In Stage 1, participants will develop and evaluate their agents on Thruxton Circuit (top), which is included with the Learn-to-Race environment. In Stage 2, participants will be evaluated on an unseen track, the North Road Track at the Las Vegas Motor Speedway (bottom), with the opportunity to 'practice' with unfrozen model weights for one hour prior to evaluation.

[Track images: Thruxton Circuit (top) and the North Road Track at the Las Vegas Motor Speedway (bottom)]

🎮 The Learn-to-Race Framework

Learn-to-Race is an open-source, Gym-compliant framework that leverages a high-fidelity racing simulator developed by Arrival. Arrival's simulator not only captures complex vehicle dynamics and renders photorealistic views, but also plays a key role in bringing autonomous racing technology to real life in the Roborace series, the world's first extreme competition of teams developing self-driving AI. Refer to learn-to-race.org to learn more.

Learn-to-Race provides access to customizable, multimodal sensory inputs. One can access RGB images from any specified location, semantic segmentation, and vehicle states (e.g., pose, velocity). During local development, participants may use any of these inputs. During evaluation, agents will ONLY have access to speed and RGB images from cameras placed on the front, right, and left of the vehicle.
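
As a hedged sketch of what this restriction means in practice (the observation keys, array shapes, and action layout below are assumptions for illustration, not the starter kit's actual interface), an agent can be written so that it only ever reads the speed value and the three permitted camera feeds:

```python
import numpy as np

# Hypothetical observation keys for this sketch only; the actual keys and
# array shapes are defined by the Learn-to-Race environment / starter kit.
ALLOWED_KEYS = {"speed", "camera_front", "camera_left", "camera_right"}

def restrict_observation(obs: dict) -> dict:
    """Keep only the inputs available at evaluation time.

    During local development the environment may also expose segmentation
    masks, pose, velocity, etc.; dropping them here keeps the agent honest
    about what it will see once submitted.
    """
    return {k: v for k, v in obs.items() if k in ALLOWED_KEYS}

class EvaluationSafeAgent:
    def select_action(self, obs: dict) -> np.ndarray:
        obs = restrict_observation(obs)
        speed = obs["speed"]          # scalar speed (assumed units)
        front = obs["camera_front"]   # H x W x 3 RGB array (assumed shape)
        # ... a policy network would consume `speed` and the RGB images here ...
        steering, acceleration = 0.0, 0.1   # placeholder action values
        return np.array([steering, acceleration], dtype=np.float32)
```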

🏋️ Competition Structure

Format

The Learn-to-Race challenge tests an agent's ability to execute the requisite behaviors for competition-style track racing, through multimodal perceptual input. The competition consists of 2 stages.

  • In Stage 1, participants will submit model checkpoints to AIcrowd for evaluation on Thruxton Circuit. The submissions will first be ranked on success rate, and then submissions with the same success rate will be ranked on average speed. Aside from Thruxton Circuit, additional race tracks are available in the Learn-to-Race environment for development.
  • The top 10 teams on the leaderboard will enter Stage 2, where their agents will be evaluated on an unseen track. The top-performing teams will submit their models (with initialization) to AIcrowd for training on the unseen track for a fixed period of one hour. During the one-hour 'practice' period, participants are free to perform any model updates or exploration strategies of their choice; the number of safety infractions will be accumulated, reflecting the principle that an autonomous agent should remain safe throughout its interaction with the environment. After the 'practice' period, the agent will be evaluated on the unseen track. The participating teams will first be ranked on success rate, and then submissions with the same success rate will be ranked on a weighted sum of the number of safety infractions and the average speed.
    • Specifically, the number of safety infractions and the average speed will each be normalized against the field before being combined into the weighted sum (see the illustrative formula after this list).
    • The max / median will be computed over the metrics from all Stage 2 participants.
  • To prevent participants from achieving a high success rate by driving very slowly, we will set the maximum episode length based on an average speed of 30 km/h during evaluation.
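
The official weighting is defined by the organizers. Purely as an illustration consistent with the description above (the symbols, weights, and normalizers below are assumptions, not the official formula), a normalized weighted sum for Stage 2 could take the form:

\[
\text{score} \;=\; \alpha \cdot \frac{\bar{v}}{\bar{v}_{\max}} \;-\; (1 - \alpha) \cdot \frac{n_{\text{inf}}}{\tilde{n}_{\text{median}}}, \qquad \alpha \in [0, 1],
\]

where \(\bar{v}\) is the agent's average speed, \(\bar{v}_{\max}\) is the maximum average speed among all Stage 2 participants, \(n_{\text{inf}}\) is the agent's number of safety infractions, and \(\tilde{n}_{\text{median}}\) is the median number of safety infractions among all Stage 2 participants.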

📜 Rules

Additionally, participants:

  • are limited to 5 submissions every 24 hours
  • will only have access to speed and RGB images from cameras placed on the front, right, and left of the vehicle during evaluation
  • are restricted from accessing model weights or custom logs during evaluation
  • are required to submit their source code (top performers only)

📏 Evaluation Metrics

Success Rate

  • Each race track will be partitioned into a fixed number of segments, and the success rate is calculated as the number of successfully completed segments divided by the total number of segments (see the sketch after this list).
  • If the agent fails within a segment, it will respawn at rest at the beginning of the next segment.
  • If the agent successfully completes a segment, it will continue on to the next segment, carrying over its current speed.
  • A higher success rate is better.
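
As a minimal sketch of how the success rate and average speed combine into the leaderboard ordering (the data layout below is hypothetical; the official evaluator computes these values server-side):

```python
from dataclasses import dataclass

@dataclass
class EpisodeResult:
    team: str
    segments_completed: int   # segments finished without a failure
    total_segments: int       # fixed number of segments per track
    distance_m: float         # total distance travelled, in metres
    time_s: float             # total time taken, in seconds

    @property
    def success_rate(self) -> float:
        return self.segments_completed / self.total_segments

    @property
    def average_speed(self) -> float:   # metres per second, proxy for performance
        return self.distance_m / self.time_s

def rank(results):
    """Rank on success rate first; average speed breaks ties (Stage 1 rule)."""
    return sorted(results, key=lambda r: (r.success_rate, r.average_speed),
                  reverse=True)
```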

Average Speed

  • Average speed is defined as the total distance travelled divided by the total time taken, and is used as a proxy for performance.
  • As this is Formula-style racing, higher speed is better.

Number of Safety Infractions

  • The number of safety infractions is accumulated during the 1-hour 'practice' period in Stage 2 of the competition.
  • The agent is considered to have incurred a safety infraction if two wheels of the vehicle leave the drivable area, the vehicle collides with an object, or the vehicle fails to make sufficient progress (e.g., gets stuck).
  • In Learn-to-Race, the episode terminates upon a safety infraction.
  • A smaller number of safety infractions is better, i.e. the agent is safer.

🚀 Getting Started

Please complete the following steps to get started:

  • To obtain access to the autonomous racing simulator, go to the 'Resources' tab on the challenge page, and sign the license agreement that will allow you to download the simulator. (We suggest that you do this as soon as possible).
    • 'PI' stands for principal investigator. If you are part of a team with a lead researcher, please fill in their information; otherwise, use your own name.
  • Clone the official L2R starter kit to obtain the Learn-to-Race training framework, baselines, and starter code templates.
  • Review the documentation, as well as additional notes/suggestions, for more information on installation, running agents, and evaluation (a minimal interaction-loop sketch follows this list).
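
As a minimal sketch of a Gym-style interaction loop (the constructor names in the usage comment are placeholders; follow the starter kit templates and documentation for the actual interface):

```python
import gym  # the Learn-to-Race environment follows the Gym API

def run_episode(env: gym.Env, agent, max_steps: int = 5000) -> float:
    """Roll out a single episode and return the cumulative reward."""
    obs = env.reset()
    total_reward, done, steps = 0.0, False, 0
    while not done and steps < max_steps:
        action = agent.select_action(obs)       # e.g. [steering, acceleration]
        obs, reward, done, info = env.step(action)
        total_reward += reward
        steps += 1
    return total_reward

# Hypothetical usage; the environment constructor and its arguments are
# placeholders -- instantiate the environment exactly as shown in the
# starter kit templates and documentation.
# env = build_env_from_starter_kit_config(...)
# agent = MyAgent(...)
# print(run_episode(env, agent))
```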

Here is a summary of useful material to get started with the Learn-to-Race competition:

Papers:

  • Learn-to-Race: A Multimodal Control Environment for Autonomous Racing, James Herman*, Jonathan Francis*, Siddha Ganju, Bingqing Chen, Anirudh Koul, Abhinav Gupta, Alexey Skabelkin, Ivan Zhukov, Max Kumskoy, Eric Nyberg, ICCV 2021 [PDF] [Code]
  • Safe Autonomous Racing via Approximate Reachability on Ego-vision, Bingqing Chen, Jonathan Francis, James Herman, Jean Oh, Eric Nyberg, Sylvia L. Herbert [PDF]

Video Instructions:

- Part 1: Downloading simulator, navigating code and making a submission: https://youtu.be/W6WdWrB10g4 
- Part 2: Challenge walkthrough, submission process and leaderboard: https://www.youtube.com/watch?v=pDBFr450aI0

📢 Update: 17-Jan-22

Changes to the StarterKit and Code Documentation

This post concerns recent changes and patches made to the starter kit. These patches address issues that contestants were facing regarding stability, metrics calculations, and agent initialisation. Additionally, camera configuration interfaces were optimised for simplicity, and the codebase documentation was updated and extended. Some changes included in this patch necessitate re-evaluation of previous submissions, which may affect leaderboard results.

Here is the changelog:

  1. Simplified the camera interface code, for environment/simulator interaction.
  2. Added additional camera configurations for other sensors that are permitted for use during training (a hypothetical configuration sketch follows this changelog).
  3. Resolved agent initialisation issues, related to yaw ambiguity; this corrects situations where agents respawned with incorrect orientation after failing track segments. Previously, this produced spurious results, where agents were assigned incorrect segment-completion metrics, during evaluation. This fix may affect leaderboard results.
  4. Provided additional agent tracking information, displayed on the console during training and evaluation.
  5. Revised the code documentation to address recent inquiries:
    1. Environment description: https://learn-to-race.readthedocs.io/en/latest/env_overview.html
    2. Sensor configuration: https://learn-to-race.readthedocs.io/en/latest/sensors.html#creating-custom-sensor-configurations
    3. Getting started: https://learn-to-race.readthedocs.io/en/latest/getting_started.html 
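
For participants customising additional sensors during training, a hedged sketch of what a camera configuration entry might look like is shown below (all field names and values are assumptions for illustration; the sensors documentation linked above defines the actual schema):

```python
# Hypothetical camera configuration, for illustration only; consult
# https://learn-to-race.readthedocs.io/en/latest/sensors.html for the
# actual parameter names, units, and defaults used by the starter kit.
custom_camera_config = {
    "name": "CameraLeftWing",       # identifier your agent reads from (assumed)
    "width": 512,                   # image width in pixels (assumed)
    "height": 384,                  # image height in pixels (assumed)
    "position": [0.0, -0.8, 0.5],   # mounting offset on the vehicle, metres (assumed)
    "rotation": [0.0, 0.0, -45.0],  # camera orientation in degrees (assumed)
}
```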

We hope participants find these changes helpful!

Participants are strongly encouraged to incorporate these changes as soon as possible. To do this, please initiate a merge request from the upstream repository (https://gitlab.aicrowd.com/learn-to-race/l2r-starter-kit) into your forked repository.

Claim your $50 training credits here.

๐Ÿ† Prizes

We are proud to have AWS sponsor generous prizes for this challenge!

The Learn-to-Race Challenge gives a special invitation to the Top 10 teams to collaborate, improve L2R, and jointly advance the field of research. Read below for more details on the prizes 👇

The top 3 teams on the leaderboard will get:

  • $1,000 worth of AWS credits each
  • 1 DeepRacer car each

All of the top 10 teams on the leaderboard will get:

  • Mentorship from the organizing committee and the Safe Learning for Autonomous Driving Workshop organizers to author papers for submission to that workshop at an AI conference.

The top 10 Community Contributors will get:

  • $100 worth of AWS credits each. Read here for details on the Community Contribution Prize.

And lastly, every single team/participant that participates in the challenge will get

⏱️ Timeline

Stage 1

  • 6th December '21 - 28th February '22
  • Code Review from 25th Feb to 4th March (The L2R competition organizers will review the code to confirm sensor inputs and correctness of model development) 

Stage 2

  • 4th March '22 - 14th March '22
  • Code review - 15th March '22 to 22nd March '22 

- Embargo announcements (winners only) - 23rd March '22. The L2R team will then work with the top 3 winning teams to curate their solutions for a presentation at the AIcrowd Townhall, until 28th March '22.
- Public Townhall for Q&A, next steps (conference workshops), and winner announcements - 1st April '22

🤖 Team

  • Jonathan Francis (CMU)
  • Siddha Ganju (CMU alum)
  • Shravya Bhat (CMU)
  • Sidharth Kathpal (CMU)
  • Bingqing Chen (CMU)
  • James Herman (CMU alum)
  • Ivan Zhukov (Arrival)
  • Max Kumskoy (Arrival)
  • Jyotish P (AIcrowd)
  • Sharada Mohanty (AIcrowd)
  • Sahika Genc (AWS)
  • Cameron Peron (AWS)

🤝 Sponsors

The Challenge is organized and hosted by AIcrowd, with challenge code, simulators, and challenge materials provided by faculty and students at Carnegie Mellon University, engineers at ARRIVAL Ltd., and engineers at AIcrowd. Third parties, such as Amazon Web Services, provide sponsorship to cover running costs, prizes, and compute grants.

📱 Contact

If you have any questions, please contact Jyotish P (jyotish@aicrowd.com), post on the Community Discussion board, or join the party on our Discord!