
๐Ÿ† Meta CRAG Challenge 2024: Winners Announcement

Hello Participants,

We are excited to announce the winners of the Meta CRAG Challenge 2024! We deeply appreciate the efforts and contributions of every participant in helping advance Retrieval-Augmented Generation (RAG) systems. Over the last four months, the challenge saw over 2,000 participants from across the globe, with more than 5,500 submissions. Below are the winners for each task. You can find details of the final evaluation process here.


🧩 Task 1: Retrieval Summarisation

  • 🥇 1st Place: Team db3
  • 🥈 2nd Place: Team md_dh
  • 🥉 3rd Place: Team ElectricSheep

Category Winners:

  • 🌸 Simple with Condition: Team dummy_model
  • 🌸 Set: Team dummy_model
  • 🌸 Comparison: Team dRAGonRAnGers
  • 🌸 Aggregation: Team dummy_model
  • 🌸 Multi-hop: Team bumblebee7
  • 🌸 Post-processing: Team dRAGonRAnGers
  • 🌸 False Premise: Team ETSLab

๐ŸŒ Task 2: Knowledge Graph + Web Retrieval

  • ๐Ÿฅ‡ 1st Place: Team db3
  • ๐Ÿฅˆ 2nd Place: Team APEX
  • ๐Ÿฅ‰ 3rd Place: Team md_dh

Category Winners:

  • 🌸 Simple with Condition: Team ElectricSheep
  • 🌸 Set: Team ElectricSheep
  • 🌸 Comparison: Team dRAGonRAnGers
  • 🌸 Aggregation: Team ElectricSheep
  • 🌸 Multi-hop: Team ElectricSheep
  • 🌸 Post-processing: Team ElectricSheep
  • 🌸 False Premise: Team Future

🤖 Task 3: End-to-End Retrieval-Augmented Generation

  • 🥇 1st Place: Team db3
  • 🥈 2nd Place: Team APEX
  • 🥉 3rd Place: Team vslyu-team

Category Winners:

  • 🌸 Simple with Condition: Team StarTeam
  • 🌸 Set: Team md_dh
  • 🌸 Comparison: Team dRAGonRAnGers
  • 🌸 Aggregation: Team md_dh
  • 🌸 Multi-hop: Team ETSLab
  • 🌸 Post-processing: Team md_dh
  • 🌸 False Premise: Team Riviera4

🔦 Winner Spotlight Series

Let's look at some standout solutions from the 2024 challenge. These approaches offer valuable insights for participants aiming to build robust multi-modal RAG systems.


🌟 Team db3 – Overall Winner (1st in all three tasks)

  • Team Members: Jiazun Chen and Yikuan Xia
  • Affiliation: Peking University (PhD candidates under Prof. Gao Jun)
  • Expertise: Community search in large graphs, table fusion, and cross-domain graph alignment.

Solution Highlights:

  • Task 1: Used HTML parsing (BeautifulSoup), LangChain chunking, bge-base-en-v1.5 retriever, custom reranker, and fallback prompts ("I don't know") for uncertain outputs.
  • Tasks 2 & 3: Combined web and KG data using a Parent-Child Chunk Retriever, orchestrated LLM-controlled API calls, and added tight reranking constraints.
  • Hallucination Control: Fine-tuned generation on grounded evidence, used calculators for numerical reasoning, and constrained LLM output behaviour.

🔗 Read Full Solution
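The Task 1 pipeline described above can be sketched end-to-end. This is a minimal illustration, not db3's actual code: Python's stdlib `html.parser` stands in for BeautifulSoup, fixed-size word windows stand in for LangChain's chunker, and a toy lexical-overlap score stands in for the bge-base-en-v1.5 retriever and custom reranker. Only the shape of the pipeline and the "I don't know" fallback mirror the described behaviour.

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script>/<style> (stand-in for BeautifulSoup)."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts, self._depth = [], 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if not self._depth and data.strip():
            self.parts.append(data.strip())


def chunk(text, size=50, overlap=10):
    """Overlapping fixed-size word windows (stand-in for LangChain's splitter)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]


def score(query, passage):
    """Toy lexical-overlap relevance; db3 used bge-base-en-v1.5 embeddings
    plus a custom reranker at this step."""
    q = set(query.lower().split())
    return len(q & set(passage.lower().split())) / max(len(q), 1)


def answer(query, html, threshold=0.5):
    """Retrieve the best chunk, or fall back when retrieval is too weak."""
    extractor = TextExtractor()
    extractor.feed(html)
    chunks = chunk(" ".join(extractor.parts))
    best = max(chunks, key=lambda c: score(query, c))
    if score(query, best) < threshold:
        return "I don't know"  # fallback prompt for uncertain outputs
    return best                # in the real system: context handed to the LLM
```

The threshold-gated fallback is the key reliability trick: abstaining scores better under CRAG's hallucination-penalising metric than guessing from weak evidence.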


🌟 Team dRAGonRAnGers – Category Winner (Comparison) in all three tasks

  • Team Members: Students from POSTECH's Data Systems Lab
  • Motivation: Build cost-efficient, real-world-ready RAG pipelines.

Solution Highlights:

  • Retrieval Gating: Skipped retrieval when LLM confidence was high (likely via fine-tuned LLaMA variants).
  • Answer Verification: Implemented a self-consistency pass to validate generated responses.
  • Optimisation Focus: Balanced latency, retrieval cost, and hallucination reduction with a two-stage architecture.

🔗 Read Full Solution
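The retrieval-gating and answer-verification ideas above can be sketched as a small two-stage function. This is a hypothetical sketch, not the team's code: `llm` and `retrieve` are stand-in callables (the real system likely used fine-tuned LLaMA variants and a web retriever), and a simple majority vote over repeated samples approximates the self-consistency pass.

```python
from collections import Counter


def answer_with_gating(question, llm, retrieve,
                       conf_threshold=0.8, n_samples=3):
    """Two-stage sketch: skip retrieval when the model is already confident,
    then accept an answer only if repeated samples agree (self-consistency).
    `llm(question, context)` must return an (answer, confidence) pair."""
    _, conf = llm(question, None)              # stage 1: probe model confidence
    ctx = None if conf >= conf_threshold else retrieve(question)  # retrieval gate
    # stage 2: self-consistency pass over sampled answers
    votes = Counter(llm(question, ctx)[0] for _ in range(n_samples))
    top, count = votes.most_common(1)[0]
    return top if count > n_samples // 2 else "I don't know"  # abstain otherwise


# Toy deterministic stand-ins, for demonstration only.
def toy_llm(question, context):
    if context is not None:
        return ("Paris", 0.95)  # grounded answer from retrieved context
    if "France" in question:
        return ("Paris", 0.9)   # model confident on its own -> retrieval skipped
    return ("unknown", 0.2)     # low confidence -> retrieval triggered


print(answer_with_gating("What is the capital of France?", toy_llm,
                         lambda q: "The capital of France is Paris."))
# prints: Paris
```

Gating saves a retrieval call on easy questions (latency and cost), while the consistency vote converts disagreement into abstention rather than a hallucinated answer.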


🌟 Team md_dh – 2nd in Task 1, 3rd in Task 2, Category Winner in Task 3

  • Team Member: Mitchell DeHaven
  • Affiliation: ML Engineer at Darkhive; previously at USC's ISI
  • Background: NLP, speech systems, and logic-based reasoning.

Solution Highlights:

  • Pipeline (MARAGS): Used BeautifulSoup-based chunking, cross-encoder reranking, and modular LoRA adapters per task.
  • Reliability Features: Filtered for "hittability", added fallback prompts, and verified API responses through execution (e.g., eval()).
  • Reasoning Tools: Used Chain-of-Thought prompting for complex logic and table operations.

🔗 Read Full Solution
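The execution-based verification mentioned above can be illustrated in miniature. A hedged sketch: instead of the bare eval() the write-up mentions, this uses an AST walk restricted to arithmetic, and `verify_answer` is a hypothetical helper name. It shows only the idea of trusting a numeric claim when re-executing the model's own working reproduces it.

```python
import ast
import operator

# Arithmetic operators permitted when re-executing a model's working.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}


def safe_eval(expr):
    """Evaluate an arithmetic-only expression via an AST walk; anything else
    (names, calls, attributes) is rejected. A safer stand-in for bare eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError(f"disallowed node: {type(node).__name__}")
    return walk(ast.parse(expr, mode="eval"))


def verify_answer(expression, claimed_result, tol=1e-9):
    """Accept a numeric claim only if executing the accompanying expression
    reproduces it; reject unparseable or non-arithmetic input outright."""
    try:
        return abs(safe_eval(expression) - float(claimed_result)) < tol
    except (ValueError, SyntaxError, ZeroDivisionError):
        return False
```

Checking a claim by executing the working, rather than trusting the model's stated result, catches exactly the arithmetic slips that plague LLMs on aggregation and post-processing questions.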