LEARN-TO-RACE COMPETITION OFFICIAL RULES

PLEASE READ THESE OFFICIAL RULES CAREFULLY. ENTRY INTO THIS COMPETITION CONSTITUTES YOUR ACCEPTANCE OF THESE OFFICIAL RULES. IF YOU DO NOT AGREE TO ANY PART OF THESE OFFICIAL RULES, PLEASE DO NOT ENTER THIS CHALLENGE.

NO PURCHASE IS NECESSARY TO ENTER OR WIN. A PURCHASE OF ANY KIND WILL NOT INCREASE YOUR CHANCES OF WINNING; VOID WHERE PROHIBITED.

1. Competition Description

The Learn-to-Race Autonomous Racing Virtual Challenge (hereafter referred to as “The Challenge”) is a competition among engineering teams to develop software-based agents that compete in simulated autonomous racing environments.

Each team consists of contestants who will design and submit for evaluation one agent (of which there may be several iterations over the course of the competition), trained or otherwise designed using their own resources or resources provided to them via sponsorship.

2. Organizers and Helper Entities

The Challenge is organized by engineers from AIcrowd SA, EPFL Innovation Park, Bâtiment C, c/o Fondation EPFL Innovation Park, 1015 Lausanne, Switzerland; these entities will be, collectively, henceforth referred to as “The Organizer”. 

“Helper Entities” are any companies, individuals, or organizations authorized by The Organizer to aid them with the administration, sponsorship, or execution of The Challenge—including, but not limited to, AIcrowd SA, faculty, staff, and students from Carnegie Mellon University, engineers from ARRIVAL Ltd., and scientists and engineers from Amazon AWS.

3. Sponsors

Third parties, such as Amazon Web Services, may provide sponsorship to cover running costs, prizes, and compute grants. These third parties will be hereafter referred to as “The Sponsors”.

4. Entry

An Entry in this competition refers to a git repository on gitlab.aicrowd.com which includes:

  • Source code for running the submitted agent on the Learn-to-Race Environment
  • Any data locally required for running this source code, e.g., model weights
  • Specifications of the software runtime context
  • System description as specified in Section 12-b of the rules
  • Information provided during competition registration

In order to submit an entry to the competition, a representative of a participating team must create an account on AIcrowd and register for the competition on the AIcrowd Learn-to-Race Challenge page: 

https://www.aicrowd.com/challenges/learn-to-race-autonomous-racing-virtual-challenge

Registration for the challenge will require the representative to declare, on behalf of their team, that the team primarily consists of non-industry researchers.

5. Competition Phases

The Challenge will be organized across two phases:

5-a. Development Phase ( Round 1 )

During the development phase:

  1. Participants will be able to submit agents to the evaluation service with a limit of 1 successful submission every 24 hours. The 24-hour submission limit windows will reset every day at 00:00 UTC.
  2. Submitted agents will be evaluated on a number of differently-seeded episodes, determined by The Organizer during the evaluation period. The metrics will be averaged over these episodes.
  3. Submissions made during the Development Phase will appear on the Competition Leaderboard and will count towards Test Phase qualification, but will not count towards the final competition ranking.
  4. Participants are welcome to run the evaluation protocol locally, but the result of this evaluation will not appear on the leaderboard.

5-b. Test Phase ( Round 2 )

During the Test Phase:

  1. The top 10 participants from the development phase will be eligible to participate in the Test Phase. The Organizer reserves the right to extend the number of participants.
  2. Participants may submit up to three times during this entire phase, and the best results will be used for the final ranking. This is intended to give contestants a chance to deal with bugs or submission errors gracefully.
  3. During the test phase, scores will be returned to the contestants upon completion of a submission, but rankings and the leaderboard will be kept secret until the announcement event, on 29 April 2022.
  4. In the test phase, submitted agents will be evaluated on an unseen track. Each submitted agent will have the opportunity to practice for a fixed period of one hour, during which the agent is free to perform any model updates or exploration. The number of safety infractions during the one-hour practice period will be accumulated as an evaluation metric. 
  5. After the one-hour practice, the submitted agents will be evaluated on three episodes.  The metrics will be averaged over the three episodes.

6. Competition Start and End Dates

Development Phase:

6 December 2021, 23:59:00 UTC – 15 February 2022, 23:59:00 UTC

Test Phase:

15 February 2022, 23:59:00 UTC – 21 February 2022, 23:59:00 UTC

7. Competition Subtasks

Rankings will be performed on the basis of “passive” subtasks. Submissions are not made to a specific subtask, but rather to The Challenge as a whole. Submissions will be ranked in each and every track that they qualify for, based on the system description (see Section 12-b). Note that all eligible submissions qualify for the first subtask, and any one of the additional subtasks. If there is any ambiguity as to whether a particular submission qualifies for a track, including whether a submission qualifies for any one of the additional subtasks beyond subtask #1, the decision will be made at the discretion of The Organizer.

The subtasks for this competition will be:

  1. Best overall agent, awarded to the best performing agent in the competition. All submitted agents qualify for this track.
  2. Best agent speed, awarded to the fastest agent in the competition, on the basis of average speed, conditioned on a success rate no less than 80%. All submitted agents qualify for this track.
  3. Best agent safety, awarded to the safest agent in the competition, on the basis of the total number of safety infractions during the one-hour practice period, conditioned on a success rate no less than 80%.

8. Agent Design

There is no restriction on how an agent is implemented, trained, or run except during evaluation, where:

  1. The participants may use any observation modality supported by the Learn-to-Race environment, when developing their agents. 
  2. In both the Development Phase and the Test Phase, submitted agents will not have access to pose information on the evaluation server, except for speed.
  3. The agent must act on the environment by proposing actions, according to the action space supported by the Learn-to-Race environment.
  4. The agent must not attempt to directly interact with the ARRIVAL Autonomous Racing Simulator binary, except through the OpenAI Gym-compliant API provided by the Learn-to-Race framework.
  5. During evaluation, the agent must not attempt to connect to resources or other parties outside of the test environment (e.g., by connecting to a third-party controller). If the agent requires access to “external resources”, a local copy should be incorporated into the agent and relied upon during evaluation.
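The constraints above amount to a standard Gym-style interaction loop: the agent observes, proposes an action, and receives the next observation from the environment. The sketch below is illustrative only — the environment and agent classes are hypothetical stand-ins (the real Learn-to-Race environment returns camera observations and takes steering/acceleration actions) — but the reset/step control flow mirrors the OpenAI Gym-compliant API the rules require.

```python
import random


class DummyL2REnv:
    """Hypothetical stand-in for the Learn-to-Race Gym-compliant environment.

    The real environment returns camera observations and accepts
    [steering, acceleration] actions; scalars are used here so the
    sketch stays self-contained.
    """

    def __init__(self, episode_steps=5):
        self.episode_steps = episode_steps
        self._t = 0

    def reset(self):
        self._t = 0
        return 0.0  # initial observation

    def step(self, action):
        self._t += 1
        obs = float(self._t)
        reward = 1.0
        done = self._t >= self.episode_steps
        info = {}
        return obs, reward, done, info


class RandomAgent:
    """Hypothetical agent that proposes actions from the action space."""

    def select_action(self, obs):
        # [steering in [-1, 1], acceleration in [-1, 1]]
        return [random.uniform(-1, 1), random.uniform(-1, 1)]


def run_episode(env, agent):
    """Drive one episode purely through the reset/step interface."""
    obs = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = agent.select_action(obs)
        obs, reward, done, info = env.step(action)
        total_reward += reward
    return total_reward


total = run_episode(DummyL2REnv(), RandomAgent())
```

Because the agent only ever calls `reset` and `step`, it never touches the simulator binary directly, which is the pattern rule 4 above requires.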

8-a. Time and Turn limits

  1. To prevent participants from achieving a high success rate by driving very slowly, the maximum episode length will be set based on an average speed of 30 km/h. The evaluation will terminate if the maximum episode length is reached, and metrics will be computed based on performance up to that point.
  2. Contestants can run the evaluation protocol for their agent locally with or without these constraints, to benchmark their agent's efficiency privately.
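As a rough illustration of the episode-length cap, the time budget implied by a 30 km/h minimum average speed scales linearly with track length. The function name and the 5 km track length below are hypothetical; the actual limits are set by the evaluation server.

```python
def max_episode_seconds(track_length_m, min_avg_speed_kmh=30.0):
    """Time budget for one lap, given a minimum average speed.

    track_length_m is a hypothetical input; actual track lengths
    are determined by the evaluation server.
    """
    # Convert km/h to m/s: 30 km/h = 30 * 1000 m / 3600 s
    min_avg_speed_ms = min_avg_speed_kmh * 1000.0 / 3600.0
    return track_length_m / min_avg_speed_ms


# e.g., a 5 km lap at a 30 km/h floor allows roughly 10 minutes
budget = max_episode_seconds(5000.0)  # ≈ 600 seconds
```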

9. Local Evaluations

Code will be shared to allow participants to evaluate their agents against the testing protocol either locally or remotely, as well as perform integration tests to determine whether their code will run on the evaluation server.

10. Competition Environment

The environment for all evaluations will be based on the Learn-to-Race environment, provided in the Learn-to-Race public repository, instantiated with all default parameters and settings. Should the need arise, The Organizer reserves the right to make bug fixes and maintenance changes to the environment to ensure the smooth running of the competition. In such events, updates will be publicized, and the results currently available on the leaderboard will stand.

11. Am I Eligible to Enter the Challenge?

You are eligible to enter The Challenge if you (and each member of your team) meet all of the following requirements as of the time and date of entry:

  • You are an individual;
  • You are 18 years of age or older but in no event less than the age of majority in your place of residence;
  • You have Internet Access, an Email Account, and access to a personal computer;

The Helper Entities and Sponsors will not be able to transfer prize money to accounts of any of the following countries or regions. (Please note that residents of these countries or regions are still allowed to participate in the challenge and be ranked in the official rankings.)

  • The Crimea region of Ukraine
  • Cuba
  • Iran
  • North Korea
  • Sudan
  • Syria
  • Quebec, Canada
  • Brazil
  • Italy

Furthermore, teams involving one or more participants from The Organizer may submit entries for the purpose of benchmarking and comparison, but such entries are not considered part of the competition for the purpose of the official rankings and are not eligible for Prizes.

Teams from institutions sponsoring the competition, excluding AIcrowd SA, are eligible to participate (subject to the individual conditions listed above) and appear in official rankings, but are not eligible for Prizes.

Please Note: it is entirely your responsibility to review and understand your employer’s and country’s policies about your eligibility to participate in The Challenge. If you participate in violation of your employer’s or country’s policies, you and your Entry may be disqualified from The Challenge. The Organizer disclaims any and all liability or responsibility with respect to disputes arising between an employer and such employer’s employee, or between a country and its resident, in relation to this matter.

12. Is the Entry an Eligible Entry?

To be eligible to be considered for a prize, as solely determined by The Organizer:

The Entry MUST:

  • be compatible with the official submission format;
  • be self-contained and function without a dependency on any external services or network access;
  • be in English;
  • be the Team’s own original work;
  • not have been submitted previously in any promotion of any kind;
  • not contain material or content that: is inappropriate, indecent, obscene, offensive, sexually explicit, pornographic, hateful, tortious, defamatory, or slanderous or libelous; or promotes bigotry, racism, hatred, or harm against any group or individual or promotes discrimination based on race, gender, ethnicity, religion, nationality, disability, sexual orientation, or age; or promotes alcohol, illegal drugs, or tobacco; or violates or infringes another’s rights, including but not limited to rights of privacy, publicity, or their intellectual property rights; or is inconsistent with the message, brand, or image of The Organizer; is unlawful; or is in violation of or contrary to the laws or regulations of any jurisdiction in which the Entry is created; and

The Team members MUST:

  • designate one person as the team leader who will be solely responsible for receiving communications from and communicating with the Sponsor;
  • ensure the Team has obtained any and all consents, approvals, or licenses required for submission of the Entry;
  • obtain any consents necessary from all members of the Team with respect to the sharing of such member’s personal information as outlined herein;
  • obtain the agreement of all members of the Team to these Rules;
  • not generate the Entry by any means which violate these Rules, or the Terms of Service or Privacy Policy of The Organizer or the Sponsors;
  • not engage in false, fraudulent, or deceptive acts at any phase during participation in the Challenge; and
  • not tamper with or abuse any aspect of The Challenge.

12-a. Source Code Release

Participants are not required to release their source code to be ranked on the final leaderboard(s), but to be eligible for the Prizes, Participants are required to release the source code (including but not limited to training and inference code) of their solutions under an Open Source Initiative (OSI)-approved license.

If a participating team does not receive the Prizes for the above-mentioned reason, the prizes will be offered to the next eligible team on the final leaderboard.

It is a requirement of entering into the competition that the source code for submissions during the test phase be privately shared with The Organizer and Helper Entities, to be used solely for the purpose of adjudication and checking for improper interactions between the agent and the environment.

Organizer reserves the right to disqualify a team if any improper interactions between the agent and the environment are found during the code inspection.

12-b. Systems description requirement

Participants submitting agents during the test phase will be required to submit a description of their system (training process, design, structure, etc.) using a free-form text entry field, with guiding questions provided by The Organizer and Helper Entities. The amount of detail offered is up to the contestants, but The Organizer strongly encourages participants to be as precise and thorough as possible. The system descriptions will be released at the end of the competition and will be used to categorize each system into tracks for the purpose of subtask-specific rankings. Some system descriptions may be used to support the writing of a publication for the anticipated Workshop on Safe Learning for Autonomous Driving, which subtask winners and select runners-up may be invited to co-author, at the discretion of The Organizer.

13. Disqualification

  • If you, any Team member, or the Entry is found to be ineligible for any reason, including but not limited to conflicts within Teams and noncompliance with these Rules, The Organizer and Helper Entities reserve the right to disqualify the Entry and/or you and/or your Team members from this Challenge and any other contest or promotional activity sponsored or administered in any way by The Organizer.
  • A participant is not allowed to create more than one account to participate in The Challenge. Violating this will result in disqualification from The Challenge.
  • Participants should not attempt to get around the limited number of submissions during the test phase by entering several teams into the competition. Participants should only be associated with one Entry. If two teams have overlap in team members, or if The Organizer deems two entries to be effectively similar modulo small changes, they reserve the right to disqualify both teams.
  • If a participating team does not receive the Prizes for any of the above-mentioned reasons, the prizes will be offered to the next eligible team on the final leaderboard.

14. How may the Entry potentially be used?

The Entry may be used in a few different ways. The Organizer does not claim to own your Team’s Entry; however, by submitting the Entry, you and each member of your Team:

  • hereby grants to The Organizer and Helper Entities a non-exclusive, irrevocable, royalty-free, world-wide right and license to review and analyze the Entry in relation to The Challenge;
  • hereby grants to The Organizer and Helper Entities a non-exclusive, irrevocable, royalty-free, world-wide right and license to data generated from evaluation of the Entry, to be shared and used in a report submitted to a scientific publication or conference venue after the competition has ended, and possibly released as an open-source dataset;
  • agrees that each member will execute any necessary paperwork for The Organizer and the Helper Entities to use the rights and licenses granted hereunder;
  • acknowledges and agrees that the Team will not be compensated and may not be credited (at The Organizer’s sole discretion) for the use of the Entry as described in these Rules;
  • acknowledges that The Organizer and Helper Entities may have developed or commissioned materials similar to the Entry and waive any claims resulting from any similarities to the Entry;
  • understands and acknowledges that, subject to provision of Prizes, The Organizer and Helper Entities are not obligated to use the Entry in any way, even if the Entry is selected as a winning Entry.

Personal data you submit in relation to The Challenge will be used by The Organizer in accordance with Section 20 of these Rules.

15. How will Winners be Selected and Notified?

15-a. Definitions

  • Safety Infraction: The agent is considered to have incurred a safety infraction if two wheels of the vehicle leave the drivable area, the vehicle collides with an object, or the vehicle does not progress for a number of steps (i.e., it is stuck). The agent is considered to have failed upon any safety infraction.
  • Episode: One episode is defined as one lap around a race track, which consists of multiple non-overlapping segments. If the agent fails at a certain segment, it will respawn stationary at the beginning of the next segment. If the agent successfully completes a segment, it will continue on to the next segment, carrying over its current speed.

Metrics

The submitted agents will be evaluated on the metrics as defined here:

  • Success Rate is calculated as the number of successfully completed segments over the total number of segments in one episode. 
  • Average Speed is defined as the total distance traveled divided by total time.
  • Number of Safety Infractions (Test Phase ONLY) is the total number of safety infractions accumulated during the 1-hour ‘practice’ period in the Test Phase of the competition. 
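The first two metrics follow directly from their definitions above. The function names and the episode figures below are hypothetical illustrations, not part of the official evaluation code.

```python
def success_rate(segment_results):
    """Fraction of successfully completed segments in one episode.

    segment_results is a list of booleans, one per segment.
    """
    return sum(segment_results) / len(segment_results)


def average_speed(total_distance_m, total_time_s):
    """Average speed in m/s: total distance traveled over total time."""
    return total_distance_m / total_time_s


# Hypothetical episode: 9 of 10 segments completed, 4.5 km covered in 300 s
sr = success_rate([True] * 9 + [False])  # 0.9
avg = average_speed(4500.0, 300.0)       # 15.0 m/s
```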

15-b. Ranking

During evaluation, and for the purpose of ranking submissions, the following ranking mechanism will be used by The Organizer:

  • In the Development Phase, the submissions will first be ranked on success rate, hereafter referred to as the "primary evaluation metric score". If two or more Entries have the same primary evaluation metric score, they will be ranked based on average speed. The top 10 participants from the Development Phase will be eligible to participate in the Test Phase. If two or more Entries are tied with the 10th best-performing Entry on both evaluation metrics, then the said teams are also eligible to participate in the Test Phase.
  • In the Test Phase, the submissions will first be ranked on the primary evaluation metric score. If two or more Entries have the same primary evaluation metric score, they will be ranked based on a weighted sum of the total number of safety infractions and the average speed, hereafter referred to as the "secondary evaluation metric score".
  • If two or more Entries are tied on both the primary and secondary evaluation metric scores, then the prizes will be shared evenly among the said Teams.
  • Participants are encouraged, but in no way required, to incorporate this scoring mechanism into the training process of their agent (where relevant).
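The Development Phase tie-breaking above amounts to a lexicographic sort: success rate first, average speed second, both descending. A minimal sketch, with hypothetical team names and scores:

```python
def dev_phase_rank(entries):
    """Rank Development Phase entries lexicographically.

    Each entry is a (team, success_rate, avg_speed) tuple. Higher
    success rate wins; average speed breaks ties (higher is better).
    """
    return sorted(entries, key=lambda e: (-e[1], -e[2]))


leaderboard = dev_phase_rank([
    ("team_a", 0.9, 14.0),
    ("team_b", 0.9, 16.0),
    ("team_c", 0.8, 20.0),
])
# team_b and team_a tie on success rate (0.9), so average speed
# places team_b first; team_c ranks last despite its higher speed.
```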

Potential winners will be contacted within two weeks of the advertised end of the test phase (Section 6) via the email associated with the aicrowd.com account through which the Entry was submitted and must submit their systems description at that time in the form and within the timeframe specified by The Organizer. If a potential winner (including each member of the potentially winning team) cannot be contacted, does not respond as directed, refuses the prize, or is found to be ineligible for any reason, such prize may be forfeited and awarded to an alternate winner. Only one alternate winner will be selected per each prize package, after which prizes will remain unawarded.

To the extent that there is any dispute as to the identity of the potential winner, the registered account holder of the email address associated with the AIcrowd account through which the Entry was first submitted will be deemed the official potential winner by The Organizer. A registered account holder is defined as the natural person who is assigned to an email address by an Internet access provider, online service provider, or other organization (e.g., business, educational institution, etc.) that is responsible for assigning email addresses for the domain associated with the submitted email address.

16. Your Odds of Winning

ODDS OF WINNING A PRIZE ARE SUBJECT TO THE TOTAL NUMBER OF ELIGIBLE ENTRIES RECEIVED AND HOW YOUR ENTRY SCORES IN ACCORDANCE WITH THE JUDGING CRITERIA.

17. Prizes

Prizes will be announced separately on the AIcrowd Learn-to-Race Challenge competition page and advertised via social media. Prizes will be fulfilled in a manner determined by The Organizer and may require winners to have a bank account to receive prize funds.

18. When will prizes be awarded?

The prizes will be awarded within a commercially reasonable time frame to the designated Team Leader unless otherwise agreed to by Team Leader, remaining Team members and Organizer. All members of a Team may be required to complete and sign additional documentation, such as non-disclosures, representations and warranties, liability and publicity releases (unless prohibited by applicable law), and tax documents, or other similar documentation in the manner and within the timeframe specified by Organizer in order for the potentially winning team to claim the prize. Neither The Organizer nor the Helper Entities will in any way be involved in any dispute with respect to receipt of a prize by any other members of a Team, including, without limitation, division of the prize value among Team members. Winners are responsible for any tax liability that may result from receipt of any prize.

Only prizes claimed in accordance with these Rules will be awarded.

19. Winner List

A list of all winners of this Challenge will be posted on AIcrowd Site and may be announced at The Organizer’s or Helper Entities’ discretion via The Organizer’s and Helper Entities’ Twitter, Facebook, Blog, or Website, or at an Organizer or Helper Entities sponsored or hosted event.

20. Your Personal Data and Privacy

The Organizer may use cookies and/or collect IP addresses for the purpose of implementing or exercising its rights or obligations under the Rules, for information purposes, identifying your location, including without limitation for the purpose of redirecting you to the appropriate geographic website, if applicable, or for any other lawful purpose in accordance with the AIcrowd Privacy Policy.

The Organizer may use the personal data you provide via your participation in this Challenge:

  • to contact you in relation to the Challenge;
  • to confirm the details of your Entry;
  • to administer and execute this Challenge, including sharing it with Helper Entities;
  • at The Organizer’s discretion, to credit you and/or your Team for the Entry, identify you and/or your Team as a Winner, or other similar notice; and
  • as otherwise noted in these Rules or as necessary for The Organizer to meet their obligations under these Rules or applicable law.

The Organizer only requires a name and email address to be submitted for a participant to participate in this Challenge, for the uses outlined in this Section 20.

Please read the AIcrowd Terms and Conditions, Participation Terms carefully to understand how your data may be used by AIcrowd SA.

21. Additional Terms and Conditions

If The Organizer determines, in their sole discretion, that any portion of this Challenge is compromised by virus, bugs, unauthorized human intervention, or any other causes beyond its control, that in the sole opinion of The Organizer corrupts or impairs the administration, security, fairness, or proper participation in/of the Challenge, The Organizer reserves the right to (a) cancel the Challenge; (b) pause the Challenge until such time as the aforementioned issues may be resolved; or (c) consider only those Entries submitted prior to when the Challenge was so compromised for the prizes.

To the fullest extent permitted by applicable law, you agree that The Organizer, Helper Entities, and each of their directors, officers, employees, agents and assigns, will not be liable for personal injuries, death, damages, expenses or costs or losses of any kind resulting from participation or inability to participate in this Challenge or acceptance of or use or inability to use a prize or parts thereof including, without limitation, claims, suits, injuries, losses and damages related to personal injuries, death, damage to or destruction of property, rights of publicity or privacy, defamation or portrayal in a false light (whether intentional or unintentional), whether under a theory of contract, tort (including negligence), warranty or other theory.

Your use of any other products and services, whether required by these Rules or not, is subject to the terms and conditions associated with such products or services, including the AIcrowd site and services.

In the event any clause or provision of these Rules prove unenforceable, void or incomplete, the validity of the other conditions will remain unaffected.