
πŸ† CPDC 2023: Winners Announcement

Hello Participants,

We’re excited to announce the tentative winners of the Commonsense Persona-Grounded Dialogue Challenge (CPDC) 2023! A big thank you to all who participated, experimented, and pushed the boundaries of persona-grounded dialogue systems. The challenge received hundreds of submissions, and after a rigorous evaluation, here are the top performers in each task.

🧠 Task 1: Commonsense Dialogue Response Generation
πŸ₯‡ 1st Place: @ni_kai_hua – $15,000
πŸ₯ˆ 2nd Place: @wangzhiyu918 – $7,000
πŸ₯‰ 3rd Place: Team justsnail (@jiayu_liu, @kevin_yan) – $3,000

πŸ“š Task 2: Commonsense Persona Knowledge Linking
πŸ₯‡ 1st Place: @biu_biu – $5,000
πŸ₯ˆ 2nd Place: Team test_team (@wangxiao, @yiyang_zheng) – $3,000
πŸ₯‰ 3rd Place: @TieMoJi – $2,000

πŸ”¦ Winner Spotlight

🌟 @ni_kai_hua – 1st in Task 1

Background: AI graduate from the University of Leeds with experience at Augmentum and CareerBuilder.
Expertise: AI, deep learning, language dynamics.

πŸ—οΈ Solution Highlights

  • Fine-tuned an LLM on curated persona datasets
  • Crafted precise prompts embedding persona traits
  • Tuned attention windows for dialogue coherence
  • Built a custom evaluation loop for feedback refinement
  • Integrated ethical safeguards to ensure responsible output

πŸ”§ Key Techniques
Fine-tuning + prompt engineering synergy, privacy-aware generation, fallback mechanisms for uncertainty.
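
To make these ideas concrete, here is a minimal Python sketch of embedding persona traits in a prompt with a fallback for low-confidence generations. All names and the confidence heuristic are hypothetical illustrations, not the winner's actual code.

```python
# Hypothetical sketch: embed persona traits in the prompt and fall back
# to a safe reply when the model signals low confidence. Illustrative
# only -- not the winning implementation.

PERSONA_TEMPLATE = (
    "You are {name}. Persona traits:\n{traits}\n"
    "Stay in character and keep replies consistent with these traits."
)

def build_prompt(name: str, traits: list[str], history: list[str]) -> str:
    """Embed persona traits and the running dialogue into one prompt."""
    trait_lines = "\n".join(f"- {t}" for t in traits)
    system = PERSONA_TEMPLATE.format(name=name, traits=trait_lines)
    return f"{system}\n\nDialogue so far:\n" + "\n".join(history) + "\nYour reply:"

def respond(generate, prompt: str, min_confidence: float = 0.5) -> str:
    """`generate` is any callable returning (text, confidence); how the
    confidence is derived (e.g. mean token log-prob) is model-specific."""
    text, confidence = generate(prompt)
    if confidence < min_confidence:
        # Fallback mechanism for uncertainty.
        return "I'm not sure about that. Could you tell me more?"
    return text

if __name__ == "__main__":
    prompt = build_prompt("Alex", ["loves hiking", "works as a chef"],
                          ["User: Any weekend plans?"])
    print(respond(lambda p: ("Off to the trails, then cooking dinner!", 0.9), prompt))
```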

πŸ‘‰ Read More


🌟 @wangzhiyu918 – 2nd in Task 1

Team: Zhiyu Wang, Puhong Duan, Zhuojun Xie, Wang Liu, Bin Sun, Xudong Kang, Shutao Li
Background: PhD candidate at Hunan University focused on vision-language and multi-modal LLMs.

πŸ—οΈ Solution Highlights

  • Inspired by ChatGPT-style prompting techniques
  • Applied clarity, conciseness, and relevance principles
  • Designed and tested high-performing prompt templates

πŸ”§ Key Techniques
Prompt-focused strategy refined with GPT-4, iterative design loop.
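
To show the shape of such an iterative design loop, here is a hedged sketch that scores candidate prompt templates on a small dev set and keeps the best one. The templates, `generate`, and `score_response` are placeholders for the team's actual model call and metric.

```python
# Hypothetical sketch of an iterative template-testing loop: score each
# candidate prompt template on a dev set and keep the best. `generate`
# and `score_response` stand in for the model call and whatever
# automatic metric (e.g. an LLM judge) was actually used.

TEMPLATES = [
    "Persona: {persona}\nDialogue: {dialogue}\nReply briefly and stay relevant:",
    "You are the character described below.\n{persona}\n\n{dialogue}\nYour reply:",
]

def pick_best_template(generate, score_response, dev_set):
    """Return (template, mean score) for the best-scoring template."""
    best, best_score = None, float("-inf")
    for template in TEMPLATES:
        scores = [
            score_response(
                generate(template.format(persona=ex["persona"],
                                         dialogue=ex["dialogue"])),
                ex["reference"],
            )
            for ex in dev_set
        ]
        mean = sum(scores) / len(scores)
        if mean > best_score:
            best, best_score = template, mean
    return best, best_score
```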

πŸ‘‰ Read More


🌟 Team justsnail (@jiayu_liu, @kevin_yan) – 3rd in Task 1

Background: Kaidi Yan is a software engineer focused on C++ and LLMs.

πŸ—οΈ Solution Highlights

  • Minimalist, strategic prompts to simulate dialogue flow
  • Special prompt design for openers
  • Single unified user prompt for full dialogue history
  • Lightweight post-processing for fluency and completeness

πŸ”§ Key Techniques
Strategic brevity + adaptive prompt design for generalisation.
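
A minimal sketch of the unified-prompt idea follows: fold the whole dialogue history into a single user message, then lightly post-process the reply. The details below are assumptions for illustration, not the team's code.

```python
# Hypothetical sketch: one unified user prompt carrying the full
# dialogue history, plus lightweight post-processing of the reply.

def unify_history(persona: str, turns: list[tuple[str, str]]) -> str:
    """Collapse all (speaker, utterance) turns into a single user prompt."""
    lines = [f"{speaker}: {utterance}" for speaker, utterance in turns]
    return (f"{persona}\n\nConversation so far:\n" + "\n".join(lines)
            + "\nContinue the conversation with one natural reply.")

def post_process(reply: str) -> str:
    """Lightweight cleanup: trim whitespace, drop an unfinished last sentence."""
    reply = reply.strip()
    if reply and reply[-1] not in ".!?":
        cut = max(reply.rfind(p) for p in ".!?")
        reply = reply[: cut + 1] if cut != -1 else reply
    return reply
```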

πŸ‘‰ Read More


🌟 @biu_biu – 1st in Task 2

Background: NLP practitioner with a focus on dialogue systems and commonsense reasoning.

πŸ—οΈ Solution Highlights

  • Ran baseline evaluation on hidden test sets
  • Augmented dataset using GPT-3.5-Turbo to generate 20,000+ synthetic conversations
  • Fine-tuned DeBERTa-V3 with thorough hyperparameter tuning
  • Modelled head/tail fact prediction paths separately and jointly

πŸ”§ Key Techniques
Synthetic data generation + modern transformer architectures for knowledge grounding.
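
For readers who want to see the shape of this approach, here is a minimal fine-tuning step for DeBERTa-V3 as a sentence-pair classifier (utterance, persona fact) using Hugging Face transformers. The hyperparameters and toy data are illustrative, not the winner's exact setup.

```python
# Illustrative sketch: fine-tune DeBERTa-V3 to classify whether a
# persona fact is linked to a dialogue utterance. Not the exact recipe.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=2
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Toy batch: (dialogue utterance, persona fact, linked?) triples.
pairs = [
    ("I go hiking every weekend.", "The speaker enjoys outdoor activities.", 1),
    ("I go hiking every weekend.", "The speaker is allergic to cats.", 0),
]
texts_a, texts_b, labels = zip(*pairs)
batch = tokenizer(list(texts_a), list(texts_b),
                  truncation=True, padding=True, return_tensors="pt")
labels = torch.tensor(labels)

model.train()
outputs = model(**batch, labels=labels)  # cross-entropy loss built in
outputs.loss.backward()
optimizer.step()
```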

πŸ‘‰ Read More


🌟 @yiyang_zheng – 2nd in Task 2

Team: Yiyang Zheng (Shanghai University), Yingwu Yang (ML in finance)

πŸ—οΈ Solution Highlights

  β€’ Fine-tuned Phi-2 for efficient multi-turn reasoning
  β€’ Focused on implicit persona alignment and ambiguous fact handling
  β€’ Balanced fluency with commonsense-grounded accuracy

πŸ”§ Key Techniques
Compact LLM selection (Phi-2), subtle context reasoning, and fine-tuned generalisability.
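
As a rough illustration, the sketch below prompts Phi-2 with a dialogue plus a candidate fact and parses a yes/no answer. The prompt wording and parsing are assumptions; the team's fine-tuning recipe is not reproduced here.

```python
# Hypothetical sketch: cast persona knowledge linking as yes/no
# generation with Phi-2. Prompt format and parsing are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "Dialogue:\nUser: I spent all Sunday on the trail again.\n\n"
    "Fact: The speaker enjoys hiking.\n"
    "Question: Is the fact implied by the dialogue? Answer yes or no.\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=3,
                         pad_token_id=tokenizer.eos_token_id)
answer = tokenizer.decode(out[0, inputs["input_ids"].shape[1]:],
                          skip_special_tokens=True).strip().lower()
print("linked" if answer.startswith("yes") else "not linked")
```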

πŸ‘‰ Read More


🌟 @TieMoJi – 3rd in Task 2

Background: AI algorithm engineer at NetEase Games (NLP and deep learning).

πŸ—οΈ Solution Highlights

  β€’ Verbalised structured persona triples into natural-language statements
  β€’ Framed the task as sentence-triple correlation
  β€’ Used multi-prompt training and model fusion for robust learning

πŸ”§ Key Techniques
Template-based structuring, mixed-precision training, and class-balanced resampling.
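
The verbalisation step might look like the following hypothetical sketch: render each (head, relation, tail) fact with several templates (the multi-prompt idea) and average the correlation scores, a miniature stand-in for model fusion. Relations, templates, and the scoring function are all illustrative.

```python
# Hypothetical sketch: verbalise persona triples with multiple templates
# and fuse the per-template correlation scores. Illustrative only.

TEMPLATES = {
    "has_hobby": ["{head} enjoys {tail}.", "{head}'s hobby is {tail}."],
    "works_as": ["{head} works as {tail}.", "{head}'s job is {tail}."],
}

def verbalise(head: str, relation: str, tail: str) -> list[str]:
    """Render one structured fact as several natural-language variants."""
    return [t.format(head=head, tail=tail) for t in TEMPLATES[relation]]

def fused_score(score, utterance: str, variants: list[str]) -> float:
    """Average the correlation scores a scoring function assigns to each
    verbalised variant (a minimal form of fusion)."""
    return sum(score(utterance, v) for v in variants) / len(variants)

if __name__ == "__main__":
    variants = verbalise("the speaker", "has_hobby", "hiking")
    print(fused_score(lambda u, v: float("hiking" in v), "I love trails", variants))
```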

πŸ‘‰ Read More