CPDC 2023: Winners Announcement
Hello Participants,
We're excited to announce the tentative winners of the Commonsense Persona-Grounded Dialogue Challenge (CPDC) 2023! A big thank you to all who participated, experimented, and pushed the boundaries of persona-grounded dialogue systems. The challenge received hundreds of submissions, and after a rigorous evaluation, here are the top performers in each task.
Task 1: Commonsense Dialogue Response Generation
1st Place: @ni_kai_hua – $15,000
2nd Place: @wangzhiyu918 – $7,000
3rd Place: Team justsnail (@jiayu_liu, @kevin_yan) – $3,000
Task 2: Commonsense Persona Knowledge Linking
1st Place: @biu_biu – $5,000
2nd Place: Team test_team (@wangxiao, @yiyang_zheng) – $3,000
3rd Place: @TieMoJi – $2,000
Winner Spotlight
@ni_kai_hua – 1st in Task 1
Background: AI graduate from the University of Leeds with experience at Augmentum and CareerBuilder.
Expertise: AI, deep learning, language dynamics.
Solution Highlights
- Fine-tuned an LLM on curated persona datasets
- Crafted precise prompts embedding persona traits
- Tuned attention windows for dialogue coherence
- Built a custom evaluation loop for feedback refinement
- Integrated ethical safeguards to ensure responsible output
Key Techniques
Fine-tuning + prompt engineering synergy, privacy-aware generation, fallback mechanisms for uncertainty.
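The prompt-embedding and fallback ideas above translate into a small amount of glue code. This is an illustrative sketch only: the template wording, the `generate` callable, and the confidence interface are assumptions, not the winner's actual implementation.

```python
PERSONA_TEMPLATE = (
    "You are chatting as the persona below. Stay consistent with these "
    "traits and do not reveal private details.\n"
    "Persona:\n{persona}\n\nDialogue so far:\n{history}\nAssistant:"
)

def build_prompt(persona_traits, history_turns):
    # Embed persona traits and the dialogue history into one prompt.
    persona = "\n".join(f"- {trait}" for trait in persona_traits)
    history = "\n".join(history_turns)
    return PERSONA_TEMPLATE.format(persona=persona, history=history)

def respond(generate, prompt, threshold=0.5):
    # `generate` is any callable returning (text, confidence). When the
    # model is uncertain, fall back to a safe clarifying question rather
    # than risk an off-persona or unsafe reply.
    text, confidence = generate(prompt)
    if confidence < threshold:
        return "I'm not sure I follow -- could you tell me a bit more?"
    return text
```

Routing low-confidence generations to a neutral clarifying question is one simple way to combine the "fallback mechanisms for uncertainty" and "ethical safeguards" points.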
@wangzhiyu918 – 2nd in Task 1
Team: Zhiyu Wang, Puhong Duan, Zhuojun Xie, Wang Liu, Bin Sun, Xudong Kang, Shutao Li
Background: PhD candidate at Hunan University focused on vision-language and multi-modal LLMs.
Solution Highlights
- Inspired by ChatGPT-style prompting techniques
- Applied clarity, conciseness, and relevance principles
- Designed and tested high-performing prompt templates
Key Techniques
Prompt-focused strategy refined with GPT-4, iterative design loop.
Team justsnail (@jiayu_liu, @kevin_yan) – 3rd in Task 1
Background: Kaidi Yan is a software engineer focused on C++ and LLMs.
Solution Highlights
- Minimalist, strategic prompts to simulate dialogue flow
- Special prompt design for openers
- Single unified user prompt for full dialogue history
- Lightweight post-processing for fluency and completeness
Key Techniques
Strategic brevity + adaptive prompt design for generalisation.
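The "single unified user prompt" and opener ideas can be sketched as below; the speaker-tagging scheme, opener wording, and post-processing rules are assumptions for illustration, not the team's actual code.

```python
OPENER_PROMPT = "Start the conversation with a friendly, in-persona greeting."

def unified_prompt(turns):
    # turns: list of (speaker, text) tuples covering the whole dialogue.
    # With no history, use the dedicated opener prompt; otherwise flatten
    # every turn into a single speaker-tagged user message.
    if not turns:
        return OPENER_PROMPT
    lines = [f"{speaker}: {text}" for speaker, text in turns]
    lines.append("Assistant:")
    return "\n".join(lines)

def postprocess(reply):
    # Lightweight cleanup for fluency/completeness: trim whitespace and
    # make sure the reply ends with terminal punctuation.
    reply = reply.strip()
    if reply and reply[-1] not in ".!?":
        reply += "."
    return reply
```

Folding the whole history into one user message keeps the prompt short and lets a single template generalise across turn counts.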
@biu_biu – 1st in Task 2
Background: NLP practitioner with a focus on dialogue systems and commonsense reasoning.
Solution Highlights
- Ran baseline evaluation on hidden test sets
- Augmented dataset using GPT-3.5-Turbo to generate 20,000+ synthetic conversations
- Fine-tuned DeBERTa-V3 with thorough hyperparameter tuning
- Modelled head/tail fact prediction paths separately and jointly
Key Techniques
Synthetic data generation + modern transformer architectures for knowledge grounding.
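One way to set up the separate-and-joint head/tail prediction paths is to attach both labels to each utterance/fact pair fed to the sentence-pair classifier; the field names and pairing scheme below are invented for illustration and are not the winner's pipeline.

```python
def make_linking_examples(utterance, candidate_facts,
                          gold_head=None, gold_tail=None):
    # Pair the dialogue utterance with each candidate persona fact.
    # Separate head and tail labels let the two prediction paths be
    # trained independently or jointly (e.g. with a shared encoder).
    examples = []
    for fact in candidate_facts:
        examples.append({
            "text_a": utterance,
            "text_b": fact,
            "head_label": int(fact == gold_head),
            "tail_label": int(fact == gold_tail),
        })
    return examples
```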
@TieMoJi – 3rd in Task 2
Background: AI algorithm engineer at NetEase Games (NLP and deep learning).
Solution Highlights
- Translated structured persona facts into natural language triples
- Framed task as sentence-triple correlation
- Used multi-prompt training and model fusion for robust learning
Key Techniques
Template-based structuring, mixed precision training, and class-balanced resampling.
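Verbalising a structured persona fact so it can be scored against an utterance might look like this minimal sketch; the relation names and templates are assumptions, not the actual schema used.

```python
# Invented relation templates for illustration only.
RELATION_TEMPLATES = {
    "has_hobby": "{head} enjoys {tail}.",
    "works_as":  "{head} works as {tail}.",
    "lives_in":  "{head} lives in {tail}.",
}

def verbalise(head, relation, tail):
    # Turn a structured (head, relation, tail) persona fact into a natural
    # sentence, so an utterance/fact pair can be scored by a sentence-pair
    # model as in the sentence-triple correlation framing.
    template = RELATION_TEMPLATES.get(relation, "{head} {rel} {tail}.")
    return template.format(head=head, tail=tail,
                           rel=relation.replace("_", " "))
```

Unknown relations fall back to a generic pattern, so the converter never drops a fact.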
@yiyang_zheng – 2nd in Task 2
Team: Yiyang Zheng (Shanghai University), Yingwu Yang (ML in finance)
Solution Highlights
- Fine-tuned Phi-2 for efficient multi-turn reasoning
- Focused on implicit persona alignment and ambiguous fact handling
- Balanced between fluency and commonsense-grounded accuracy
Key Techniques
Compact LLM selection (Phi-2), subtle context reasoning, and fine-tuned generalisability.