Challenges Entered
- Create Context-Aware, Dynamic, and Immersive In-Game Dialogue
- Improve RAG with Real-World Benchmarks | KDD Cup 2025
- Generate Synchronised & Contextually Accurate Videos
- Improve RAG with Real-World Benchmarks
- Revolutionise E-Commerce with LLM!
- Revolutionising Interior Design with AI
- Multi-Agent Dynamics & Mixed-Motive Cooperation
- Advanced Building Control & Grid-Resilience
- Specialize and Bargain in Brave New Worlds
- Trick Large Language Models
- Shopping Session Dataset
- Understand semantic segmentation and monocular depth estimation from downward-facing drone images
- Audio Source Separation using AI
- Identify user photos in the marketplace
- A benchmark for image-based food recognition
- Using AI For Building’s Energy Management
- Learning From Human-Feedback
- What data should you label to get the most value for your money?
- Interactive embodied agents for Human-AI collaboration
- Specialize and Bargain in Brave New Worlds
- Amazon KDD Cup 2022
- Behavioral Representation Learning from Animal Poses
- Airborne Object Tracking Challenge
- ASCII-rendered single-player dungeon crawl game
- 5 Puzzles 21 Days. Can you solve it all?
- Measure sample efficiency and generalization in reinforcement learning using procedurally generated environments
- 5 Puzzles 21 Days. Can you solve it all?
- Self-driving RL on DeepRacer cars - From simulation to real world
- 3D Seismic Image Interpretation by Machine Learning
- 5 Puzzles 21 Days. Can you solve it all?
- 5 Puzzles 21 Days. Can you solve it all?
- 5 Puzzles 21 Days. Can you solve it all?
- Multi-Agent Reinforcement Learning on Trains
- A dataset and open-ended challenge for music recommendation research
- A benchmark for image-based food recognition
- Sample-efficient reinforcement learning in Minecraft
- 5 Puzzles, 3 Weeks. Can you solve them all? 😉
- Multi-agent RL in game environment. Train your Derklings, creatures with a neural network brain, to fight for you!
- Predicting smell of molecular compounds
- Find all the aircraft!
- 5 Problems 21 Days. Can you solve it all?
- 5 Puzzles 21 Days. Can you solve it all?
- 5 Puzzles, 3 Weeks | Can you solve them all?
- Grouping/Sorting players into their respective teams
- 5 Problems 15 Days. Can you solve it all?
- 5 Problems 15 Days. Can you solve it all?
- Predict Heart Disease
- 5 PROBLEMS 3 WEEKS. CAN YOU SOLVE THEM ALL?
- Remove Smoke from Image
- Classify Rotation of F1 Cars
- Can you classify Research Papers into different categories?
- Can you dock a spacecraft to ISS?
- Multi-Agent Reinforcement Learning on Trains
- Multi-Class Object Detection on Road Scene Images
- Localization, SLAM, Place Recognition, Visual Navigation, Loop Closure Detection
- Localization, SLAM, Place Recognition
- Detect Mask From Faces
- Identify Words from silent video inputs
- A Challenge on Continual Learning using Real-World Imagery
- Music source separation of an audio signal into separate tracks for vocals, bass, drums, and other
- Amazon KDD Cup 2023
- Amazon KDD Cup 2023
- Make Informed Decisions with Shopping Knowledge
- Generate Videos with Temporal and Semantic Audio Sync
- Create Videos with Spatially Aligned Stereo Audio
- Build Context-Aware Conversational NPC Agents
- Task-Oriented Conversational AI for NPC Agents
- Context-Aware & Task-Driven NPC Agents
Team | Challenge
---|---
powerpuff | AI Blitz X
teamux | NeurIPS 2021 - The NetHack Challenge
tempteam | NeurIPS 2022 IGLU Challenge
testing | Sound Demixing Challenge 2023
grogu | HackAPrompt 2023
apollo11 | MosquitoAlert Challenge 2023
testteam | Commonsense Persona-Grounded Dialogue Challenge 2023
temp-team | Generative Interior Design Challenge 2024
Commonsense Persona-Grounded Dialogue Challenge
[Paper submission deadline for EMNLP workshop 2025]
18 days ago
Q1 – Submission Deadline
The exact deadline for submitting the technical report hasn’t been confirmed yet. Workshop submissions are independent of both the main EMNLP deadline (May 19) and the Industry Track (July 4).
For the shared task, we’ll align the paper submission timeline with the end of the challenge (June 30) and the announcement of results. Right now, we’re tentatively aiming for a deadline around late August, but this may change depending on the official EMNLP workshop schedule, which we’re still waiting on.
Q2 – Who Can Submit
Anyone who participates in the challenge can submit a paper; it’s not limited to the top teams or winners. We encourage all participants to share their work.
Q3 – Publication Details
At the moment, we don’t have confirmation on whether shared task papers will be included in the official EMNLP proceedings. That decision will come from the workshop organisers, and we’ll share an update once we hear from them.
🚀 Round 2 is now live!
25 days ago
CPDC Winner Spotlight: 💡 Ideas to Improve Solutions For Round 2
About 1 month ago
As we prepare for Round 2, we want to spotlight the winning strategies from CPDC 2023. These highlights offer practical insights and implementation tips to help strengthen your approach.
Task 2 Winner
Key Insight: Synthetic Data Generation + Modern Architectures
First Place: Kuan-Yen Lin
Username: @biu_biu
Background: NLP practitioner specialising in dialogue systems and commonsense reasoning.
Winning Strategy
Multi-Phase Approach:
- Baseline evaluation
- Dataset augmentation
- Model fine-tuning
Key Methods:
- Evaluated ComFact baseline on hidden test set
- Merged the ConvAI2 and PeaCoK datasets
- Generated 20,000 synthetic conversations using GPT-3.5-Turbo
- Fine-tuned DeBERTa-V3 with comprehensive hyperparameter search
- Predicted head/tail facts both separately and jointly
Insight: This two-path evaluation enabled structural interpretations of the task. His system—powered by synthetic data, modern architecture, and rigorous tuning—proved effective for accurate persona-grounded knowledge linking.
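To make the synthetic-data step concrete, here is a minimal sketch of how such labelled conversations might be generated with GPT-3.5-Turbo. The prompt wording, persona facts, and JSON schema are illustrative assumptions, not the winning pipeline.

```python
# Minimal sketch: generating synthetic persona-grounded conversations with
# GPT-3.5-Turbo. Prompt wording and output schema are assumptions, not the
# winning team's actual pipeline.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONA_FACTS = [  # hypothetical persona facts to ground the dialogue
    "I recently adopted a rescue dog.",
    "I work night shifts as a nurse.",
]

def generate_conversation(facts: list[str]) -> dict:
    """Ask the model for a short dialogue that implicitly uses the facts."""
    prompt = (
        "Write a 6-turn dialogue between two speakers. Speaker A's persona:\n"
        + "\n".join(f"- {f}" for f in facts)
        + "\nReturn JSON with keys 'turns' (list of strings) and "
          "'linked_facts' (indices of persona facts each turn relies on)."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
        temperature=0.9,  # higher temperature for more varied conversations
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    sample = generate_conversation(PERSONA_FACTS)
    print(json.dumps(sample, indent=2))
```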
Implementation Tips
Establish a strong baseline:
- Run the baseline model on test data
- Use results to identify weaknesses
Leverage synthetic data:
- Combine existing persona datasets
- Use GPT-3.5-Turbo to generate new labelled conversations
- Balance the dataset for broader coverage
Optimise model performance:
- Use DeBERTa-V3 or equivalent
- Perform deep hyperparameter tuning
- Experiment with separate vs. joint prediction of facts
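On the model side, here is a hedged sketch of fine-tuning DeBERTa-V3 as a binary fact-linking classifier with Hugging Face Transformers. The dataset fields and hyperparameters are assumptions for illustration; the winner ran a far wider hyperparameter search.

```python
# Sketch: fine-tuning DeBERTa-V3 to classify whether a persona fact applies
# to a dialogue turn. Data fields and hyperparameters are illustrative.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=2)

# Toy example: does the persona fact apply to this dialogue turn?
data = Dataset.from_dict({
    "dialogue": ["I was up all night at the hospital again."],
    "fact": ["I work night shifts as a nurse."],
    "label": [1],
})

def tokenize(batch):
    # Encode dialogue and candidate fact as a sentence pair.
    return tokenizer(batch["dialogue"], batch["fact"],
                     truncation=True, padding="max_length", max_length=256)

data = data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="deberta-fact-linker",
    per_device_train_batch_size=16,
    learning_rate=2e-5,   # a common starting point; the winner swept widely
    num_train_epochs=3,
)
Trainer(model=model, args=args, train_dataset=data).train()
```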
Task 2 Runner-Up
Key Insight: Relational Structure Understanding Through Natural Language
Second Place: Jinrui Liang
Username: @TieMoJi
Background: AI algorithm engineer at NetEase Games focusing on deep learning and NLP.
Winning Strategy
Core Focus: Enhancing relational understanding between persona facts using natural language templates
Key Methods:
- Augmented data with head/tail entities and relations
- Translated structured relations into natural language form
- Reframed task as sentence-triple correlation
- Designed multi-prompt training setup
- Used multi-loss optimisation and model fusion
- Adopted mixed precision training
- Applied sample resampling for class balance
Insight: His layered training and reconstruction strategy produced a generalisable architecture grounded in both theoretical and engineering best practices.
Implementation Tips
Refine data structure:
- Explicitly encode relational structure
- Use templates to express (head, relation, tail) in plain language
Advance training techniques:
- Apply multi-prompt strategies
- Incorporate multi-loss training
- Experiment with model fusion
Improve efficiency:
- Use mixed precision to accelerate training
- Apply resampling to fix class imbalance
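To illustrate the template idea, here is a small sketch that verbalises (head, relation, tail) triples into plain language. The ATOMIC-style relation names and template wordings are assumptions, not the runner-up's actual templates.

```python
# Sketch: expressing (head, relation, tail) triples as plain-language
# sentences, so the model can score sentence-triple correlation directly.
# Relation names and template wordings are illustrative assumptions.
TEMPLATES = {
    "xWant": "{head}, so they want {tail}",
    "xNeed": "before {head}, they need {tail}",
    "xAttr": "{head} suggests they are {tail}",
}

def verbalise(head: str, relation: str, tail: str) -> str:
    template = TEMPLATES.get(relation, "{head} is related to {tail}")
    return template.format(head=head, tail=tail)

print(verbalise("PersonX adopts a dog", "xWant", "to buy dog food"))
# -> "PersonX adopts a dog, so they want to buy dog food"
```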
Task 2 Third Place
Key Insight: Generative LLMs for Multi-Turn Dialogue Processing
Third Place: Yiyang Zheng
Username: @yiyang_zheng
Team: Yiyang Zheng, Yingwu Yang
Background:
- Yiyang Zheng: Undergraduate student at Shanghai University focused on NLP
- Yingwu Yang: Machine learning practitioner in the financial sector
Winning Strategy
Core Focus: Using generative LLMs to manage complex multi-turn dialogues with subtle persona reasoning
Key Methods:
- Fine-tuned Phi-2 on both official and open-source datasets
- Selected Phi-2 for its balance between reasoning and efficiency
- Focused on implicit and ambiguous dialogue-fact connections
Insight: Showed that generative LLMs like Phi-2 can effectively handle multi-turn, persona-grounded dialogue by reasoning through subtle context cues.
Implementation Tips
Choose the right LLM:
- Consider compact models with strong reasoning (e.g., Phi-2)
- Evaluate for multi-turn conversational capability
Focus on implicit reasoning:
- Curate training examples with subtle persona links
- Emphasise commonsense bridging in dialogue-fact alignment
Fine-tune for generalisability:
- Combine various datasets
- Retain a balance of general fluency and persona specificity
- Test against diverse scenarios
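For orientation, here is a minimal sketch of loading Phi-2 and generating a persona-conditioned reply with Transformers. The prompt format is an illustrative assumption, and the team's actual fine-tuning setup is not reproduced here.

```python
# Sketch: loading Phi-2 and generating a persona-conditioned reply.
# The prompt format is an assumption for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.bfloat16, device_map="auto")

example = (
    "Persona fact: I work night shifts as a nurse.\n"
    "Dialogue:\nA: You look exhausted today.\nB:"
)
inputs = tokenizer(example, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```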
CPDC Winner Spotlight: 💡 Strategies to Improve Solutions For Task 2
About 1 month ago
Hope you’re enjoying Round 1 of the CPDC 2025 Challenge! As you prepare for the upcoming round, we’re excited to share a spotlight on the winning strategies from CPDC 2023. These highlights offer practical insights and implementation tips to help strengthen your approach.
The solutions for Task 1 of CPDC 2023 (a dialogue generation task) are most closely related to Task 2 of CPDC 2025, which also focuses on persona-consistent dialogue generation.
Task 1 Winner
First Place: Kaihua Ni
Key Insight: Combining LLM Fine-tuning with Advanced Prompt Engineering
Username: @ni_kai_hua
Background: AI graduate from University of Leeds with experience at Augmentum and CareerBuilder. Specialises in AI, deep learning, and language dynamics.
Winning Strategy:
Two-Pronged Approach:
- Fine-tuned an LLM to emulate specific individuals
- Engineered precise, persona-aligned prompts to guide output generation
Key Methods:
Fine-Tuning with Transfer Learning:
- Used curated datasets (dialogues, writings) aligned with target personas
- Adapted models to reflect individual styles and semantics
Advanced Prompt Engineering:
- Defined clear conversational goals
- Subtly incorporated persona traits
- Maintained coherence across multiple dialogue turns
Dialogue Coherence:
- Applied attention window tuning and context control
Custom Evaluation Loop:
- Built bespoke evaluation metrics aligned with CPDC scoring
- Iterative refinement based on metrics
Ethical Safeguards:
- Embedded privacy protections
- Prevented harmful/inappropriate content
- Ensured ethical persona emulation
Insight: Demonstrated how LLMs can generate nuanced, human-like dialogue without compromising integrity
Implementation Tips
Want to apply Kaihua’s approach to your solution? Here are some practical steps:
For the fine-tuning component:
- Start with a smaller, more efficient LLM as your base model
- Create a curated dataset that specifically represents your target personas
- Focus on preserving stylistic elements in your training data, not just semantic content
For the prompt engineering component:
- Structure your prompts with clear sections for conversation goal, persona traits, and dialogue history
- Experiment with different attention window sizes to find optimal context retention
- Implement a simple evaluation loop to measure improvements against CPDC’s scoring criteria
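Here is a minimal sketch of such a sectioned prompt; the section names and example content are illustrative assumptions, not Kaihua's actual prompts.

```python
# Sketch: a sectioned, persona-aligned prompt with explicit slots for the
# conversation goal, persona traits, and dialogue history. Section names
# and content are illustrative assumptions.
def build_prompt(goal: str, persona: list[str], history: list[str]) -> str:
    return (
        "## Conversation goal\n" + goal + "\n\n"
        "## Persona traits\n" + "\n".join(f"- {p}" for p in persona) + "\n\n"
        "## Dialogue so far\n" + "\n".join(history) + "\n\n"
        "Reply as the persona, staying consistent with the traits above."
    )

print(build_prompt(
    goal="Help the player find the blacksmith.",
    persona=["Gruff retired soldier", "Secretly fond of cats"],
    history=["Player: Do you know where I can repair my sword?"],
))
```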
Task 1 Runner-Up
Second Place: Zhiyu Wang
Key Insight: Principles-Driven Prompt Engineering for Persona Alignment
Username: @wangzhiyu918
Team: Zhiyu Wang, Puhong Duan, Zhuojun Xie, Wang Liu, Bin Sun, Xudong Kang, Shutao Li
Background: PhD candidate at Hunan University focusing on vision-language understanding, LLMs, and multi-modal LLMs.
Winning Strategy:
Core Focus: Prompt engineering inspired by recent LLM advancements (ChatGPT, LLaMA)
Key Methods:
- Studied The Art of ChatGPT Prompting guide
- Based strategy on three principles:
- Clarity: Specific language for accurate comprehension
- Conciseness: Avoided unnecessary verbosity
- Relevance: Ensured alignment with dialogue context and persona
- Refined prompts using GPT-4
- Deployed carefully designed prompt (available in their repository)
Insight: The methodical and prompt-focused design contributed to generating highly coherent, persona-aligned responses
Implementation Tips
Want to apply Zhiyu’s approach to your solution? Here are some practical steps:
Study effective prompting techniques:
- Review prompting guidelines and best practices from established sources
- Analyze the structure of successful prompts for persona-based dialogue
Apply the three core principles:
- Clarity: Replace vague instructions with specific directives
- Conciseness: Remove redundant or tangential information from prompts
- Relevance: Ensure every element of your prompt directly contributes to persona alignment
Iterative refinement:
- Use GPT-4 or similar models to test prompt variations
- Create a systematic testing framework to compare prompt performance
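To ground the "systematic testing framework" tip, here is a tiny sketch of a harness that scores prompt variants against reference answers. The word-overlap metric is a stand-in assumption; swap in the challenge's real scoring.

```python
# Sketch: a tiny harness for comparing prompt variants against a scoring
# function. The overlap metric is a placeholder assumption.
from statistics import mean

def score_response(response: str, reference: str) -> float:
    # Placeholder metric: word overlap with the reference.
    r, g = set(response.lower().split()), set(reference.lower().split())
    return len(r & g) / max(len(g), 1)

def evaluate_prompt(generate, variant: str, eval_set: list[dict]) -> float:
    # `generate` is any callable mapping a formatted prompt to model output.
    return mean(score_response(generate(variant.format(**ex)), ex["reference"])
                for ex in eval_set)

# Toy usage with a stubbed generator.
eval_set = [{"question": "Where is the inn?", "reference": "past the mill"}]
variant = "Answer briefly.\nQ: {question}\nA:"
print(evaluate_prompt(lambda p: "just past the mill", variant, eval_set))
```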
Task 1 Third Place
Third Place: Kaidi Yan
Key Insight: Strategic Minimalism in Prompt Design
Username: @kevin_yan
Team: Kaidi Yan, Jiayu Liu
Background: Software engineer at a large technology company, primarily working on server-side C++ development, with recent focus on LLMs.
Winning Strategy:
Core Focus: Targeted prompt engineering, carefully adapted to new scoring rules and aimed at simulating natural dialogue flow
Key Methods:
- Defined clear objective at the start of the prompt
- Designed special prompts for initial utterances to simulate realistic conversation openers
- Merged all prior utterances into a single user prompt instead of user/system pairs
- Post-processed model responses for completeness and fluency
- Deliberately kept prompt length short to avoid overfitting
Insight: While brevity may have limited peak performance, his approach prioritised adaptability and relevance — a strategic trade-off for generalisation
Implementation Tips
Want to apply Kaidi’s approach to your solution? Here are some practical steps:
Simplify your prompt structure:
- Start with a clear, concise objective statement
- Remove unnecessary complexity and instructions
- Focus on the essential elements needed for persona alignment
Improve conversation handling:
- Create specialised handling for conversation starters
- Experiment with merging dialogue history into unified context
- Implement lightweight post-processing for response quality
Balance brevity with performance:
- Test incrementally shorter prompts while monitoring performance
- Identify which prompt elements contribute most to score improvement
- Find the optimal balance between prompt length and effectiveness
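Here is a short sketch of the history-merging idea: collapsing all prior utterances into a single user message rather than alternating user/assistant turns. The role labels and wording are illustrative assumptions.

```python
# Sketch: merging all prior utterances into one user message instead of
# alternating user/assistant pairs. Wording is an illustrative assumption.
def merge_history(history: list[tuple[str, str]], objective: str) -> list[dict]:
    transcript = "\n".join(f"{speaker}: {text}" for speaker, text in history)
    return [
        {"role": "system", "content": objective},
        {"role": "user", "content": f"Conversation so far:\n{transcript}\n"
                                    "Continue with the next reply."},
    ]

messages = merge_history(
    [("Player", "Morning!"), ("NPC", "You're up early."),
     ("Player", "Couldn't sleep. Any work for me?")],
    objective="You are a terse innkeeper NPC. Keep replies under 25 words.",
)
print(messages)
```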
Meta CRAG - MM Challenge 2025
🚨 Submission Selection Deadline: 23rd June 2025, 12:00 UTC (noon)
4 days ago
Hello, thanks for flagging this. You are right, it should be Task 2: Multi-source Augmentation and Task 3: Multi-turn QA. It is fixed now.
🚨 [Phase 2 Now Open] Important Updates and Announcements
26 days ago
Hi, as per the rules, you cannot join Round 2. Round 1 was open to all teams, but to enter Round 2, your team must have made at least one successful submission in Round 1.
Meta CRAG-MM Challenge: Office Hour #1 Recording
29 days ago
Hello all,
We recently hosted the first office hour for the Meta CRAG-MM Challenge, where the organising team from Meta shared key insights into Round 2, including the dataset, task structure, and Mock API. The session concluded with a Q&A addressing queries from participants.
Missed the session? Watch the full recording here: https://youtu.be/xGfOcAQ2tV8
Slides: View the slide deck
🏆 Winner Spotlight Series md_d [Meta KDD Cup 2024]
About 1 month ago
In this spotlight, we explore the modular, logic-aware system from Mitchell DeHaven (username md_dh), who secured 3rd place in Task 1.
Mitchell’s Pipeline – MARAGS System Overview
A custom-built Multi-Adapter Retrieval-Augmented Generation System (MARAGS) included:
- Document chunking via BeautifulSoup (<2000 characters per segment)
- Cross-encoder reranking of segments by query relevance
- Modular LoRA adapters fine-tuned per subtask
- CoT prompting for complex reasoning
- Evaluation through code execution of API responses
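For a feel of the first two stages, here is a hedged sketch of BeautifulSoup chunking (under 2,000 characters per segment) followed by cross-encoder reranking with sentence-transformers. The sentence-packing heuristic and the reranker checkpoint are assumptions, not Mitchell's exact choices.

```python
# Sketch: HTML chunking in the spirit of MARAGS, then cross-encoder
# reranking. Splitting heuristic and reranker checkpoint are assumptions.
from bs4 import BeautifulSoup
from sentence_transformers import CrossEncoder

def chunk_html(html: str, max_chars: int = 2000) -> list[str]:
    # Strip markup, then greedily pack sentences into <2000-char segments.
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    chunks, current = [], ""
    for sentence in text.split(". "):
        if len(current) + len(sentence) + 2 > max_chars and current:
            chunks.append(current.strip())
            current = ""
        current += sentence + ". "
    if current.strip():
        chunks.append(current.strip())
    return chunks

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def top_chunks(query: str, chunks: list[str], k: int = 5) -> list[str]:
    # Score every (query, chunk) pair and keep the k most relevant chunks.
    scores = reranker.predict([(query, c) for c in chunks])
    ranked = sorted(zip(scores, chunks), key=lambda p: p[0], reverse=True)
    return [c for _, c in ranked[:k]]
```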
Hallucination Control and Answer Reliability
- Contexts were pre-filtered for “hittability”—whether they included the actual answer
- If uncertain, the model was explicitly prompted to output: “I don’t know”
- API call responses were tested via `eval()` for execution correctness
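A minimal sketch of the hittability idea, assuming a simple substring check (the original filter may have been more sophisticated):

```python
# Sketch: "hittability" pre-filtering. Simplifying assumption: a context is
# hittable if it contains the gold answer verbatim (case-insensitive).
def hittable(context: str, answer: str) -> bool:
    return answer.lower() in context.lower()

candidates = [
    ("Who wrote Dune?", "Dune is a novel by Frank Herbert.", "Frank Herbert"),
    ("Who wrote Dune?", "Dune was adapted to film in 2021.", "Frank Herbert"),
]
train_pairs = [(q, ctx, ans) for q, ctx, ans in candidates if hittable(ctx, ans)]
print(train_pairs)  # keeps only the first context
```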
What Carries Over to 2025?
Mitchell’s strategy remains highly relevant for this year’s MM-RAG focus:
- Modular tuning scales to multi-modal pipelines via adapter switching
- Hittability filtering helps reduce noise across web, image, and KG fusion
- Evaluation via execution mirrors this year’s emphasis on verifiability and trust
- Chain-of-Thought prompting supports visual reasoning and multi-hop QA
Mitchell DeHaven is a machine learning engineer at Darkhive, with prior experience in NLP and speech systems at USC’s ISI.
Stay tuned for more insights from past CRAG standouts—and good luck with your submissions!
Read other winning strategies here: db3 and dRAGonRAnGers
🏆 Learning from Team dRAGonRAnGers’ Strategy [Meta CRAG 2024]
About 1 month ago
Winner Spotlight Series: Day 2 – dRAGonRAnGers
2nd Place in Task 1 | 3rd Place in Task 2 & 3
As we look ahead to the Meta CRAG-MM Challenge 2025, it’s worth revisiting the inventive strategies that emerged last year. In this edition of the Winner Spotlight series, we highlight Team dRAGonRAnGers from POSTECH, who earned podium finishes across all three tasks with their pragmatic and efficiency-driven approach to RAG. You can also read the complete technical report over here.
Their work is a lesson in thoughtful engineering—optimising for real-world constraints without compromising answer quality.
Challenge Recap: A Demanding Test of RAG
The 2024 CRAG Challenge pushed participants to develop Retrieval-Augmented Generation systems that could reason over web documents and structured graphs with minimal hallucinations. Success depended not just on accurate retrieval, but also on balancing cost, latency, and model robustness.
Core Insight: Trust the Model—But Verify
At the heart of the dRAGonRAnGers’ approach was an elegant refinement of the RAG pipeline aimed at:
- Avoiding unnecessary retrievals when the LLM already had a high-confidence response
- Preventing hallucinations by validating outputs through self-reflection
Their strategy revolved around a two-stage enhancement process:
Step 1: Retrieval Bypass via LLM Confidence
Rather than treating retrieval as mandatory, the team built a mechanism to assess the confidence of the LLM’s internal knowledge (likely using fine-tuned LLaMA variants). When confidence crossed a defined threshold, the system skipped retrieval entirely, saving compute and latency.
This adaptive gating proved particularly effective for factoid or frequently seen questions—an increasingly relevant optimisation for production-grade QA systems.
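A sketch of what such gating could look like, using the mean token log-probability of a draft answer as the confidence proxy. The threshold, the proxy itself, and the `generate_draft`, `retrieve`, and `generate_grounded` callables are hypothetical stand-ins; the team's actual criterion is not public.

```python
# Sketch: confidence-gated retrieval. All callables below are hypothetical
# stand-ins, and the threshold is assumed (tune on validation data).
from typing import Callable

CONFIDENCE_THRESHOLD = -0.5  # assumed value

def mean_logprob(token_logprobs: list[float]) -> float:
    return sum(token_logprobs) / max(len(token_logprobs), 1)

def answer(query: str,
           generate_draft: Callable[[str], tuple[str, list[float]]],
           retrieve: Callable[[str], str],
           generate_grounded: Callable[[str, str], str]) -> str:
    draft, logprobs = generate_draft(query)
    if mean_logprob(logprobs) >= CONFIDENCE_THRESHOLD:
        return draft                        # confident: skip retrieval
    return generate_grounded(query, retrieve(query))  # full RAG fallback

# Stub demonstration: a "confident" draft bypasses retrieval entirely.
print(answer("What is 2 + 2?",
             generate_draft=lambda q: ("4", [-0.1, -0.2]),
             retrieve=lambda q: "(web context)",
             generate_grounded=lambda q, c: "grounded answer"))
```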
Step 2: Post-Generation Answer Verification
Even when retrieval was bypassed or ambiguous data was returned, the team added a verification layer: a second pass through the LLM to judge the trustworthiness of the output.
This form of self-consistency checking acted as a safeguard, filtering out hallucinations and improving answer reliability.
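A minimal sketch of that second pass, with an assumed verification prompt and a hypothetical `llm` callable:

```python
# Sketch: post-generation self-verification. Prompt wording is an
# illustrative assumption; `llm` is a hypothetical prompt-to-text callable.
VERIFY_PROMPT = (
    "Question: {q}\nProposed answer: {a}\n"
    "Is the proposed answer well supported and likely correct? "
    "Answer exactly YES or NO."
)

def verified_answer(q: str, draft: str, llm) -> str:
    verdict = llm(VERIFY_PROMPT.format(q=q, a=draft)).strip().upper()
    return draft if verdict.startswith("YES") else "I don't know"

# Stub demonstration: a NO verdict falls back to the safe answer.
print(verified_answer("Who won in 2026?", "Team X", llm=lambda p: "NO"))
```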
Outcome: Efficient, Accurate, Scalable
The combination of selective retrieval and post-hoc verification resulted in:
- Lowered system load without sacrificing accuracy
- Fewer hallucinations, particularly in borderline or low-signal queries
- Improved responsiveness for multi-turn and interactive scenarios
In a challenge that increasingly reflects real-world constraints, their system offered a compelling balance between precision and pragmatism.
Meet the Minds Behind dRAGonRAnGers
The team hails from the Data Systems Lab at POSTECH, blending deep academic research with a drive to tackle applied AI problems.
Their participation was driven by a shared goal: explore the real-world trade-offs of building reliable, cost-efficient RAG systems.
Lessons for 2025: Efficiency Is a Competitive Advantage
While the CRAG-MM Challenge 2025 introduces multi-modal and multi-turn elements, the principles behind dRAGonRAnGers’ design carry forward:
- Retrieval Gating: In image-heavy queries, selectively triggering retrieval (e.g., only when OCR or visual tags lack clarity) could save valuable inference time.
- Answer Verification: With more complex inputs (e.g., image-KG hybrids), validating generated answers before surfacing them remains crucial.
- Resource-Aware Design: Their cost-conscious pipeline offers a strong blueprint for systems facing real-time or on-device constraints.
Stay tuned for more Winner Spotlights—and best of luck as you shape your own strategy for this year’s challenge.
🏆 Behind the Winning Strategy of Team db3 [Meta CRAG 2024]
About 1 month ago
As we gear up for the new round of the Meta CRAG-MM Challenge 2025, let’s revisit the standout approaches from last year’s competition. In this Winner Spotlight, we dive into the strategy behind Team db3, who took the top spot across all three tasks in the Meta KDD Cup 2024 – CRAG Challenge. You can also read the complete technical report over here.
This deep dive is designed to inform and inspire participants aiming to push boundaries in retrieval-augmented generation (RAG) this year.
Challenge Overview: What Was CRAG 2024 All About?
The 2024 CRAG challenge focused on building RAG systems capable of sourcing relevant knowledge from web documents and mock knowledge graphs to answer complex queries. It tested not just retrieval and generation quality but also robustness and hallucination control.
Team db3, comprising Jiazun Chen and Yikuan Xia from Peking University, achieved:
- 1st in Task 1 – Retrieval Summarisation (28.4%)
- 1st in Task 2 – Knowledge Graph + Web Retrieval (42.7%)
- 1st in Task 3 – End-to-End RAG (47.8%)
Task 1: Retrieval Summarisation
Team db3 engineered a layered retrieval-generation pipeline:
- Parse HTML with `BeautifulSoup`
- Chunk text using `LangChain` into retrievable segments
- Retrieve with the `bge-base-en-v1.5` model
- Rerank results using a custom relevance model
- Add a dynamic fallback: prompt the model to say “I don’t know” when uncertain
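A hedged sketch of a comparable retrieval stage, using LangChain's recursive splitter and bge-base-en-v1.5 embeddings via sentence-transformers. Chunk sizes and top-k are illustrative assumptions, not db3's exact settings.

```python
# Sketch of a comparable retrieval stage: LangChain's recursive splitter
# plus bge-base-en-v1.5 embeddings. Chunk sizes and k are assumptions.
import numpy as np
from bs4 import BeautifulSoup
from langchain_text_splitters import RecursiveCharacterTextSplitter
from sentence_transformers import SentenceTransformer

def build_index(html_pages: list[str]):
    splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    chunks = []
    for html in html_pages:
        text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
        chunks.extend(splitter.split_text(text))
    encoder = SentenceTransformer("BAAI/bge-base-en-v1.5")
    embeddings = encoder.encode(chunks, normalize_embeddings=True)
    return chunks, embeddings, encoder

def retrieve(query: str, chunks, embeddings, encoder, k: int = 5):
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = embeddings @ q  # cosine similarity on normalised vectors
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]
```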
Tasks 2 & 3: Knowledge Graph + Web Integration
Their architecture evolved with more complex inputs and integrations:
- Combine structured data (mock KGs) and unstructured web pages
- Implement a Parent-Child Chunk Retriever for fine-grained retrieval
- Use a tuned LLM to orchestrate API calls via a controlled, regularised set
- Perform heavy reranking to ensure only the most relevant data reached the generator
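To illustrate the Parent-Child idea: retrieval matches on small child chunks, while the larger parent chunk is what reaches the generator. The sizes below are assumptions, and character-based splitting is a simplification.

```python
# Sketch: Parent-Child chunking. Retrieval scores small child chunks;
# generation receives the matching parent for fuller context. Sizes are
# illustrative assumptions.
def parent_child_chunks(text: str, parent_size: int = 1200,
                        child_size: int = 300):
    parents = [text[i:i + parent_size]
               for i in range(0, len(text), parent_size)]
    children = []  # (child_text, parent_id) pairs
    for pid, parent in enumerate(parents):
        for j in range(0, len(parent), child_size):
            children.append((parent[j:j + child_size], pid))
    return parents, children

parents, children = parent_child_chunks("lorem ipsum dolor sit amet " * 200)
print(len(parents), len(children))
```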
Hallucination Mitigation
To keep outputs grounded and reliable, the team:
- Fine-tuned the model to rely only on retrieved evidence
- Added constraints to reduce overconfident generations
- Used Python-based calculators for numerical reasoning tasks
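A small sketch of the calculator idea: a restricted expression evaluator built on Python's `ast` module, safer than a bare `eval()`. The allowed operator set is an assumption.

```python
# Sketch: a restricted calculator for numerical sub-questions. Only the
# operators listed below are allowed (an assumption for illustration).
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
       ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

def calc(expr: str) -> float:
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

print(calc("2 ** 10"))  # -> 1024
```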
Meet the Team
Jiazun Chen and Yikuan Xia are third-year PhD candidates at Peking University, advised by Professor Gao Jun.
Their research focuses on:
- Community search in massive graph datasets
- Graph alignment for cross-domain analysis
- Table data fusion across heterogeneous sources
What Carries Over from 2024 to 2025?
While the Meta CRAG-MM Challenge 2025 takes a leap into multi-modal and multi-turn territory, several principles from db3’s approach remain highly applicable:
- Structured + Unstructured Retrieval: db3’s integration of knowledge graphs and web data directly informs Task 2 of CRAG-MM, which fuses image-KG with web search.
- Hallucination Mitigation: Their use of grounded generation and a standardised fallback (“I don’t know”) is vital in MM-RAG, where conciseness and truthfulness are tightly evaluated.
- Reranking and Retrieval Granularity: Techniques like Parent-Child Chunk Retrieval can be adapted to visual-context-aware retrieval in 2025.
- LLM-as-Controller: db3’s LLM-mediated API selection prefigures the multi-turn query orchestration required in this year’s Task 3.
In short: while the modality has evolved, the core disciplines—retrieval quality, grounding, and structured reasoning—remain front and centre. Studying the 2024 winning strategy is still a powerful head start for 2025.
Stay tuned for the next Winner Spotlight—and good luck with your submissions.
Read other winning strategies here: dRAGonRAnGers and md_dh
📢 📕 Updated Starter-Kit Release - v0.1.1
2 months ago
💬 Feedback & Suggestions
2 months ago
Hi @bunnyveil,
This issue is now fixed! You should be able to invite new team members now.
Sounding Video Generation (SVG) Challenge 2024
💬 Feedback & Suggestions
About 1 month ago
Hello,
The organisers at Sony are currently conducting the final round of human evaluation. As per the challenge rules, the top entries on the final leaderboard will be assessed through human evaluation, and the winning teams will be selected based on the results of this subjective assessment.
We are awaiting the outcome from the organisers and will share an update as soon as we receive it.
Thank you for your patience.
Regarding the final ranking method for Round 2
Yesterday
Yes, there will be a private leaderboard at the end of Round 2. The code will be re-run on a private dataset, and an updated leaderboard with those scores will be shared upon the completion of the challenge.