jon_francis

Location: US


Activity



Learn-to-Race: Autonomous Racing Virtual Challenge

Announcing L2R Challenge white paper

Over 3 years ago

We have posted a new white paper, which summarises the 2022 Learn-to-Race (L2R) Autonomous Racing Virtual Challenge and formally introduces the new L2R Task 2.0 evaluation benchmark that we used in this competition. https://learn-to-race.org/2022/05/04/whitepaper_l2rarvc/

We present the results of our autonomous racing virtual challenge, based on the newly-released Learn-to-Race (L2R) simulation framework, which seeks to encourage interdisciplinary research in autonomous driving and to help advance the state of the art on a realistic benchmark. Analogous to racing being used to test cutting-edge vehicles, we envision autonomous racing to serve as a particularly challenging proving ground for autonomous agents as: (i) they need to make sub-second, safety-critical decisions in a complex, fast-changing environment; and (ii) both perception and control must be robust to distribution shifts, novel road features, and unseen obstacles. Thus, the main goal of the challenge is to evaluate the joint safety, performance, and generalisation capabilities of reinforcement learning agents on multi-modal perception, through a two-stage process. In the first stage of the challenge, we evaluate an autonomous agent's ability to drive as fast as possible, while adhering to safety constraints. In the second stage, we additionally require the agent to adapt to an unseen racetrack through safe exploration. In this paper, we describe the new L2R Task 2.0 benchmark, with refined metrics and baseline approaches. We also provide an overview of deployment, evaluation, and rankings for the inaugural instance of the L2R Autonomous Racing Virtual Challenge (supported by Carnegie Mellon University, Arrival Ltd., AICrowd, Amazon Web Services, and Honda Research), which officially used the new L2R Task 2.0 benchmark and received over 20,100 views, 437 active participants, 46 teams, and 733 model submissions, from 88+ unique institutions in 58+ different countries. Finally, we release leaderboard results from the challenge and provide a description of the two top-ranking approaches in cross-domain model transfer, across multiple sensor configurations and simulated races.

[Call for Papers - Extended] 2nd Workshop on AI for Autonomous Driving at IJCAI 2022

Over 3 years ago

Hello all,

We have extended the paper submission deadline for the 2nd Workshop on Artificial Intelligence for Autonomous Driving (AI4AD), co-located with IJCAI-ECAI 2022, to May 20, 2022.

We extend a special invitation to all challenge participants!

Workshop website: https://learn-to-race.org/workshop-ai4ad-ijcai2022/

All papers related to autonomous driving are welcome (4-page extended abstracts or 8-page full papers; page count does not include references or appendices), especially those academic manuscripts that describe your research, development, and experiments on the L2R Autonomous Racing Virtual Challenge (L2R-ARVC). We provide a summary of the Challenge, for everyone's convenience: https://arxiv.org/pdf/2205.02953.pdf

As the goal is to aggregate all efforts in relevant areas, dual submission is allowed: feel free to submit work-in-progress, work under review, or work already accepted/published elsewhere.

Start a submission: https://cmt3.research.microsoft.com/AI4AD2022

Important dates (all deadlines are in Central European Time (CET), UTC +1, Paris, Brussels, Vienna):

  • Paper submissions due: 20 May 2022 (extended from 13 May 2022)
  • Author notification: 3 June 2022
  • Workshop: 23 July 2022

Everyone is welcome to attend, in-person and/or online. If you are interested, you can subscribe to our mailing list for updates, here: https://lnkd.in/eBHUfFn

Best Regards,
Organizers

  • Jonathan Francis; CMU + Bosch Research
  • Xinshuo Weng; CMU + NVIDIA Research
  • Hitesh Arora; Amazon
  • Siddha Ganju; NVIDIA
  • Bingqing Chen; CMU
  • Daniel Omeiza; Oxford
  • Jean Oh; CMU
  • Eric Nyberg; CMU
  • Sylvia L. Herbert; UCSD

Call for Papers: Workshop on AI for Autonomous Driving, at the International Joint Conference on Artificial Intelligence (IJCAI 2022)

Almost 4 years ago

Hello all,

We are happy to announce the 2nd Workshop on Artificial Intelligence for Autonomous Driving (AI4AD), co-located with the International Joint Conference on Artificial Intelligence (IJCAI 2022), to be held in Vienna and online.

Workshop website: https://learn-to-race.org/workshop-ai4ad-ijcai2022/

All papers related to autonomous driving are welcome (4-page extended abstracts or 8-page full papers; page count does not include references or appendices), especially those academic manuscripts that describe your research, development, and experiments on the L2R Autonomous Racing Virtual Challenge.

As the goal is to aggregate all efforts in relevant areas, dual submission is allowed: feel free to submit work-in-progress, work under review, or work already accepted/published elsewhere.

Start a paper submission: https://cmt3.research.microsoft.com/AI4AD2022

Important dates (all deadlines are in Central European Time (CET), UTC +1, Paris, Brussels, Vienna):

  • Paper submissions due: 13 May 2022
  • Author notification: 3 June 2022
  • Workshop: 23 July 2022

Everyone is welcome to attend, in-person and/or online. If you are interested, you can subscribe to our mailing list for updates, here: https://lnkd.in/eBHUfFn

Organizers:

  • Jonathan Francis; CMU + Bosch Research
  • Xinshuo Weng; CMU + NVIDIA Research
  • Hitesh Arora; Amazon
  • Siddha Ganju; NVIDIA
  • Bingqing Chen; CMU
  • Daniel Omeiza; Oxford
  • Jean Oh; CMU
  • Eric Nyberg; CMU
  • Sylvia L. Herbert; UCSD

Updates to timelines

Almost 4 years ago

We will send out a separate notice before the launch of Stage 2 (see here for more info: [Round 2] Launch - Expected Date - #3 by jon_francis).

Don't worry about the number of submissions you have left. Once Stage 2 opens, participants will be able to make submissions at a much higher frequency (compared to Stage 1), and we will resume the standard request protocol (🗝️ Claim Your Training Credits) for AWS credits.

Clarification on input sensors during evaluation

Almost 4 years ago

Bottom line: yes, we will allow access to the semseg cameras during the 1-hour practice period in Stage 2.

Regarding Stage 2 evaluation

Almost 4 years ago

Thanks for the note. Yes, you will be able to upload pre-trained models in Stage 2. Additionally, those models can continue to perform optimizer updates throughout the practice period. More info here: [Round 2] Launch - Expected Date - #3 by jon_francis

[Round 2] Launch - Expected Date

Almost 4 years ago

Thanks for your continued patience on the launch of Stage 2!

We are working to incorporate participant feedback and concerns before the Stage 2 launch, to ensure that Stage 2 remains both fair and accommodating. We again expect a launch this week, as we are in the final stages of testing various model types and configurations.

There will be no restriction on submission frequency, subject to the server's ability to perform the 'practice' phases and main evaluations in a timely manner. We will adopt the same protocol as before for allowing teams to request AWS credits: 🗝️ Claim Your Training Credits

A brief word about Stage 2: this phase of the competition really tests agents' abilities to safely generalise to unseen environments. We encourage participants to optimise their approaches specifically for this safe-generalisation capability, by experimenting offline with the Anglesey track as a target environment. Transfer learning techniques such as domain adaptation, *-shot learning, knowledge distillation, or self-supervision/self-training (e.g., making use of the sensory information that is available during the 'practice' phase but will not be available during the main evaluation) may prove useful; leveraging domain knowledge about the road features may be crucial. Indeed, participants will submit their models for a 1-hour practice period, wherein agents will be free to perform optimizer updates. Afterwards, the resulting checkpoint from the practice phase will be tested on the simulated North Road track at Las Vegas Motor Speedway.
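To make the practice-period mechanics concrete, here is a minimal sketch of how an agent might spend the 1-hour budget on optimizer updates. Everything here is hypothetical: `env`, `agent`, and `replay_buffer` are stand-ins for your own components, not starter-kit APIs.

```python
import time

PRACTICE_SECONDS = 60 * 60  # the 1-hour practice budget described above

def practice_phase(env, agent, replay_buffer):
    """Fine-tune a pre-trained agent during the Stage 2 practice period.

    Hypothetical sketch only: `env` is assumed to be gym-like, and `agent`
    is assumed to expose `select_action` and `update`; none of these names
    are starter-kit APIs.
    """
    deadline = time.time() + PRACTICE_SECONDS
    obs = env.reset()
    while time.time() < deadline:
        action = agent.select_action(obs)    # act with the pre-trained policy
        next_obs, reward, done, info = env.step(action)
        replay_buffer.append((obs, action, reward, next_obs, done))
        agent.update(replay_buffer)          # optimizer updates are allowed here
        obs = env.reset() if done else next_obs
    return agent  # the resulting checkpoint is evaluated on the unseen track
```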

Very much looking forward to it!

Collecting Data through Arrival Simulator

Almost 4 years ago

There are a couple options here:

  1. Save an agent's transitions automatically: agents/sac_agent.py · master · Learn to Race / l2r-starter-kit · GitLab
    • Save queue: L82-93
    • record_experience configuration flag: L328-342, L357-359, L479-494
  2. Record transitions from the simulator manually; see the thread linked below for some hints. (A sketch of option 1 follows.)
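For option 1, the general pattern (buffering transitions in a queue and periodically flushing them to disk) looks roughly like the sketch below. The class and method names are hypothetical, illustrating the idea behind the `record_experience` flag rather than reproducing the starter kit's implementation.

```python
import pickle
from collections import deque

class TransitionRecorder:
    """Buffer (obs, action, reward, next_obs, done) tuples and flush them
    to disk, in the spirit of the starter kit's `record_experience` flag.
    The class and method names here are hypothetical, not starter-kit APIs."""

    def __init__(self, path, flush_every=1000):
        self.path = path
        self.flush_every = flush_every
        self.queue = deque()

    def add(self, obs, action, reward, next_obs, done):
        self.queue.append((obs, action, reward, next_obs, done))
        if len(self.queue) >= self.flush_every:
            self.flush()

    def flush(self):
        # Append, so transitions from earlier flushes are preserved.
        with open(self.path, "ab") as f:
            while self.queue:
                pickle.dump(self.queue.popleft(), f)
```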

Get input actions given directly to simulator for creating Imitation Learning data

Almost 4 years ago

Yes, the reason that recording transitions from keyboard-based actions is not supported is, indeed, that it bypasses the L2R framework: the Python code would then be unable to intercept the commands.

However, we have contacted the developers of the simulator and will continue to look into it!

Recent changes to the StarterKit and Code Documentation

Almost 4 years ago

This post concerns recent changes and patches to the starter kit. These patches address issues that contestants were facing with stability, metrics calculations, and agent initialisation. Additionally, the camera configuration interfaces were simplified, and the codebase documentation was updated and extended. Some changes included in this patch necessitate re-evaluation of previous submissions, which may affect leaderboard rankings. See below.

Changelog:

  1. Simplified the camera interface code for environment/simulator interaction.
  2. Added additional camera configurations for other sensors that are permitted for use during training (a hypothetical configuration sketch follows this list).
  3. Resolved agent initialisation issues related to yaw ambiguity; this corrects situations where agents re-spawned with incorrect orientation after failing track segments. Previously, this produced spurious results, where agents were assigned incorrect segment-completion metrics during evaluation. This fix may affect leaderboard results.
  4. Provided additional agent tracking information, displayed on the console during training and evaluation.
  5. Revised code documentation to incorporate recent inquiries.
  6. [Edit, 17 Jan 2022 16:26 ET]: Migrated the Anglesey track map JSON file from the official L2R repo to the StarterKit.
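For item 2, a camera entry in the configuration might look roughly like the following. The field names are illustrative assumptions, not the starter kit's actual schema; consult the repository's config files for the real keys.

```python
# Hypothetical camera configuration entry. The field names below are
# illustrative assumptions, not the starter kit's actual schema.
camera_config = {
    "CameraFrontRGB": {
        "Format": "ColorBGR8",  # pixel format
        "FOVAngle": 90,         # horizontal field of view, in degrees
        "Width": 512,           # image width, in pixels
        "Height": 384,          # image height, in pixels
    },
}
```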

We hope participants find these changes helpful.

Participants are strongly encouraged to incorporate these changes as soon as possible. To do so, please initiate a merge request from the upstream repository into your respective forked repositories: https://gitlab.aicrowd.com/learn-to-race/l2r-starter-kit
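If you prefer the command line to the GitLab merge-request UI, one common way to pull the upstream patches into your fork is sketched below, assuming your fork's default branch is `master`.

```sh
# Pull upstream patches into a local clone of your fork.
git remote add upstream https://gitlab.aicrowd.com/learn-to-race/l2r-starter-kit.git
git fetch upstream
git merge upstream/master
git push origin master
```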

Need your Inputs for improving competition

Almost 4 years ago

Thanks! Added to the documentation, along with some other suggestions: https://learn-to-race.readthedocs.io/en/latest/getting_started.html

Increasing the Flow of Time in the Simulator

Almost 4 years ago

Thanks, weโ€™ve reached out to the developers of the simulator.

That said, we remain excited to see approaches that take model inference and optimisation latencies into consideration in the control policy design. Such approaches would be particularly viable for simulation-to-real transfer.
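As one concrete illustration of taking latency into consideration, a small helper could time policy inference so the control loop can budget for it. This is a hypothetical sketch: the names here are illustrative and not part of the L2R framework.

```python
import time
from collections import deque

class LatencyMonitor:
    """Track recent policy-inference latencies so a control policy can
    budget for them. Hypothetical helper, not an L2R framework API."""

    def __init__(self, window=100):
        self.samples = deque(maxlen=window)

    def timed_action(self, policy, obs):
        t0 = time.perf_counter()
        action = policy(obs)                            # run inference
        self.samples.append(time.perf_counter() - t0)   # record latency
        return action

    @property
    def mean_latency(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0
```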

Information Available on the Hidden Stage 2 Track

Almost 4 years ago

You are correct that using Las Vegas track information is not permitted.

While the map information was already omitted from the simulator itself, we have also removed the JSON layout to avoid confusion.

Thanks!
