DialogCC: An Automated Pipeline for Creating High-Quality Multi-Modal Dialogue Dataset

¹School of Computing, KAIST  ²NAVER Vision  ³NAVER Cloud Multimodal AI

Abstract

As sharing images in instant messaging is a crucial part of everyday communication, there has been active research on learning image-text multi-modal dialogue models. However, training a well-generalized multi-modal dialogue model remains challenging due to the low quality and limited diversity of images per dialogue in existing multi-modal dialogue datasets. In this paper, we propose an automated pipeline for constructing a multi-modal dialogue dataset that ensures both dialogue quality and image diversity with minimal human effort. In our pipeline, to guarantee coherence between images and dialogue, we prompt GPT-4 to infer potential image-sharing moments, specifically the utterance, speaker, rationale, and image description. Furthermore, we leverage CLIP similarity to maintain consistency among the multiple images aligned to each utterance. Through this pipeline, we introduce DialogCC, a high-quality and diverse multi-modal dialogue dataset that surpasses existing datasets in terms of quality and diversity in human evaluation. Our comprehensive experiments highlight that when multi-modal dialogue models are trained on our dataset, their generalization performance on unseen dialogue datasets is significantly enhanced.

Proposed Pipeline

[Figure: Overview of the proposed pipeline]

To construct DialogCC, we introduce an automatic pipeline that consists of three steps: (1) collecting, (2) aligning, and (3) filtering.

  • Collecting: We collect five multi-turn text-only social dialogue datasets (i.e., Persona-Chat, EmpatheticDialogues, Wizard-of-Wikipedia, DailyDialog, BlendedSkillTalk) and Conceptual Captions 3M (CC3M).
  • Aligning: After collecting the source datasets, to ensure image-dialogue coherence, we ask GPT-4 to infer all possible image-sharing moments via zero-shot prompting and leverage CLIP to improve the relevance of the aligned images (see the sketches after this list).
  • Filtering: We eliminate inappropriate images based on CLIP similarity to preserve image-image consistency (see the second sketch after this list).
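
The aligning step begins by asking GPT-4 to propose image-sharing moments for a given dialogue. Below is a minimal sketch of such a zero-shot prompt using the OpenAI Python client; the prompt wording, the `infer_image_sharing_moments` helper, and the expectation of JSON output are illustrative assumptions, not the released prompt.

```python
# Sketch of the image-sharing-moment inference step (assumed prompt wording).
# Requires `pip install openai` and an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

client = OpenAI()

PROMPT_TEMPLATE = """You are given a two-person dialogue. Identify every utterance where a
speaker would plausibly share an image. Return a JSON list of objects with the keys
"utterance", "speaker", "rationale", and "image_description".

Dialogue:
{dialogue}
"""

def infer_image_sharing_moments(dialogue_turns):
    """Zero-shot prompt GPT-4 to infer potential image-sharing moments."""
    dialogue = "\n".join(f"{t['speaker']}: {t['text']}" for t in dialogue_turns)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(dialogue=dialogue)}],
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)

moments = infer_image_sharing_moments([
    {"speaker": "A", "text": "We adopted a puppy last weekend!"},
    {"speaker": "B", "text": "That's great, what breed is it?"},
])
print(moments)
```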
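
The inferred image descriptions are then matched against candidate CC3M images with CLIP, and images that disagree with the rest of the aligned set are dropped. The following sketch illustrates both similarity checks with the Hugging Face CLIP model; the thresholds and helper names are assumptions for illustration, not the paper's exact values.

```python
# Sketch of CLIP-based alignment (image-to-description relevance) and filtering
# (image-to-image consistency). Thresholds are illustrative only.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(texts=None, images=None):
    """Return L2-normalized CLIP embeddings for a list of texts or PIL images."""
    with torch.no_grad():
        if texts is not None:
            inputs = processor(text=texts, return_tensors="pt", padding=True, truncation=True)
            feats = model.get_text_features(**inputs)
        else:
            inputs = processor(images=images, return_tensors="pt")
            feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def align_and_filter(image_description, candidate_images, align_thr=0.25, consist_thr=0.5):
    """Keep images relevant to the inferred description and consistent with each other."""
    text_emb = embed(texts=[image_description])            # (1, d)
    img_emb = embed(images=candidate_images)                # (n, d)

    # Aligning: keep images whose cosine similarity to the description is high enough.
    relevance = (img_emb @ text_emb.T).squeeze(-1)
    keep = relevance >= align_thr
    img_emb = img_emb[keep]
    images = [im for im, k in zip(candidate_images, keep) if k]

    # Filtering: drop images with low mean pairwise similarity to the remaining set.
    if len(images) > 1:
        pairwise = img_emb @ img_emb.T
        mean_sim = (pairwise.sum(dim=1) - 1.0) / (len(images) - 1)
        images = [im for im, s in zip(images, mean_sim) if s >= consist_thr]
    return images
```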

DialogCC


Findings

Finding 1: DialogCC contributes to the model’s robustness.


Although DialogCC is significantly smaller in scale than MMDialog (83K vs. 1M dialogues), it improves the model's understanding of unseen dialogue datasets on both tasks. This suggests that increasing dataset quality matters more than increasing its scale.

Finding 2: DialogCC improves the comprehension of the interaction between dialogue and images.


The model trained on DialogCC outperforms those trained on other datasets. This indicates that DialogCC significantly improves the model's comprehension of the interaction between dialogue and images, even when the image-grounded dialogue datasets encompass various patterns of multi-modal dialogue scenarios. This improvement is attributed to DialogCC's high-quality and diverse images, underscoring the reliability of our pipeline.

BibTeX

TBD