
The following rules attempt to capture the spirit of the competition, and any submission found to violate them may be deemed ineligible for participation by the organizers.

GENERAL RULES

  • Entries to the MineRL BASALT competition must be “open”. Teams will be expected to reveal most details of their method, including source code and instructions for human feedback collection (special exceptions may be made for pending publications).
  • For a team to be eligible to win, each member must satisfy the following conditions:
    • be at least 18 years old and at least the age of majority in their place of residence;
    • not reside in any region or country subject to U.S. Export Regulations; and
    • not be an organizer of this competition nor a family member of a competition organizer.
  • To receive any awards from our sponsors, competition winners must attend the NeurIPS workshop.
  • Official rule clarifications will be made in the FAQ on the AIcrowd website.
    • Answers in the FAQ are the official answers to questions. Any informal answers (e.g., via email) are superseded by answers added to the FAQ.
  • During training and testing, information from the Minecraft simulator can only be extracted through the “step” function from the provided Gym interfaces named on the competition page.
    • Domain knowledge from outside of training and testing may be used, though we are particularly interested in approaches that make minimal use of such knowledge. Submissions that rely primarily on domain knowledge are unlikely to win the creativity-of-research prize. For example, all of the following uses of domain knowledge are allowed (see the sketch after this list):
      • Use manually shaped reward functions that reward approaching tree-like objects
      • Hardcode a rule that the agent takes a "jump" action on the 50th timestep
      • Use an edge detector to preprocess video observations
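    As a concrete illustration of the two rules above, the following sketch shows the permitted interaction pattern: all simulator information flows through the Gym “step” function, and the allowed forms of domain knowledge appear as comments. This is a minimal, non-authoritative Python example; the environment id "MineRLBasaltFindCave-v0" and the use of OpenCV's Canny filter as the edge detector are assumptions for illustration only.

      import gym
      import minerl  # registers the MineRL competition environments
      import cv2     # assumed here only for the edge-detector example

      env = gym.make("MineRLBasaltFindCave-v0")  # assumed environment id
      obs = env.reset()

      done, t = False, 0
      while not done:
          action = env.action_space.noop()
          if t == 50:
              action["jump"] = 1  # allowed: hardcoded rule on the 50th timestep
          # The "step" call is the only permitted channel for extracting
          # information from the simulator during training and testing.
          obs, reward, done, info = env.step(action)
          # Allowed: preprocess the pixel observation with an edge detector.
          gray = cv2.cvtColor(obs["pov"], cv2.COLOR_RGB2GRAY)
          edges = cv2.Canny(gray, 100, 200)
          t += 1
      env.close()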
  • In addition to the provided dataset, participants may include additional small datasets, totaling no more than 30 MB, in their source file submissions. Pretrained models are only permitted if they were publicly available as of June 4, 2021.
    • During the evaluation of submitted code, the individual containers will not have access to any external network, to prevent information leakage. Exceptions are made so that participants can download and use the pretrained models included in popular frameworks like PyTorch and TensorFlow. Participants can request network exceptions for other publicly available pretrained models, which AIcrowd will validate on a case-by-case basis.
    • All submitted code repositories will be scrubbed to remove files larger than 30 MB. (Splitting a custom dataset larger than 30 MB into multiple smaller files is against the rules and will be detected during manual review.)
    • Pretrained models must not have been trained on MineRL data or any other Minecraft data. The intent of this rule is to allow participants to use models trained on, for example, ImageNet or similar datasets.
  • The procedure for the Submission Round is as follows:
    • Teams must submit source code to train their models. This code must terminate within four days on the specified platform, and may use no more than 10 hours of online human feedback. Participants may ask for the human feedback to be allocated to a specific day, but may not request a specific time of day.
    • Human comparisons will be used to create preliminary evaluations for submitted models.
    • The top 5 teams for each competition environment will automatically proceed to the Final Evaluation. Additional teams will then be selected in descending order of average leaderboard score across all competition environments, until a total of 50 teams have proceeded to the Final Evaluation.
  • The procedure for the Final Evaluation is as follows:
    • Organizers will hire contractors to compare the performance of submissions, using the same evaluation system as in the preliminary evaluations during the Submission Round. Participants are strictly prohibited from providing evaluations during the Final Evaluation.
    • Once the evaluation is complete, the Validation step begins.
  • The procedure for the Validation step is as follows:
    • Organizers will inspect submitted code for rule compliance.
    • Organizers will work with teams to run their training code and retrain their agents. Teams may be asked to submit additional documentation, particularly if their training requires online human feedback to be given by contractors.
    • If the retrained agents are significantly worse than the agents submitted during the Submission Round (as judged by the competition organizers, at their discretion), the corresponding team will be disqualified. (This check is meant to prevent teams from submitting agents produced by a training process other than the one submitted.)
    • Among the remaining teams, the winners will be determined according to the leaderboard scores from the Final Evaluation.