Date: 2024-03-22 / 3:30 ~ 4:30 PM
Location: MSC W303
The advancement of open-domain dialogue systems is a significant goal of artificial intelligence, aiming to create more engaging and human-like interactions between machines and users. A key challenge in this domain is equipping these systems with a deep understanding of human experiences, the nuances of which are often subtly implied rather than explicitly stated in conversation. Social commonsense resources aim to support this understanding by capturing commonsense knowledge about people's motivations, the causes of events, their emotional reactions, and more. However, existing datasets and methods for integrating social commonsense into dialogue applications suffer from low coverage, sparse detail, and contextual redundancy, which limits their ability to support meaningful dialogue interactions. Recognizing these limitations, this dissertation explores how open-domain dialogue systems can be enhanced through improved integration of social commonsense knowledge.
This dissertation is structured around three core objectives: developing a reliable evaluation framework for assessing the commonsense capabilities of dialogue models, creating a dataset of contextually novel commonsense inferences tailored to dialogue, and integrating these inferences into dialogue models to enhance their conversational abilities. The first objective is addressed through the introduction of Annotation of Behaviors in Chat Evaluation (ABC-Eval), a binary behavior-based evaluation framework that offers a more objective and grounded assessment of dialogue models' commonsense reasoning. The second objective is achieved with the development of ConvoSense, the largest dataset of its kind, providing novel commonsense inferences designed specifically for dialogue contexts. The third objective culminates in Commonsense Inference Generate-Select-Respond (CSI-GSR), a novel approach that leverages the rich pool of commonsense inferences from ConvoSense to guide dialogue response generation, sketched below.
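To make the Generate-Select-Respond idea concrete, the sketch below outlines one plausible shape of such a pipeline: generate several candidate commonsense inferences for the current dialogue context, select the most useful one, and condition the response on it. This is only an illustration of the description above, not the dissertation's implementation; the function names (generate_inferences, select_inference, generate_response) and the novelty-based selection heuristic are hypothetical placeholders for learned models.

```python
from typing import List

def generate_inferences(context: List[str], question: str) -> List[str]:
    """Placeholder for a ConvoSense-style generator: given a dialogue context and a
    commonsense question type (e.g., the listener's likely emotional reaction),
    return candidate inferences. A real system would call a trained model here."""
    return [f"(inference about '{question}' given: {context[-1]})"]

def select_inference(context: List[str], candidates: List[str]) -> str:
    """Placeholder selection step: prefer the candidate that adds the most new
    content relative to the context (a crude proxy for contextual novelty)."""
    context_tokens = set(" ".join(context).lower().split())
    def novelty(candidate: str) -> int:
        return len(set(candidate.lower().split()) - context_tokens)
    return max(candidates, key=novelty)

def generate_response(context: List[str], inference: str) -> str:
    """Placeholder for an inference-conditioned response generator."""
    return f"(response to '{context[-1]}' grounded in: {inference})"

def csi_gsr_turn(context: List[str], question_types: List[str]) -> str:
    """One Generate-Select-Respond turn over several commonsense question types."""
    candidates = [inf for q in question_types for inf in generate_inferences(context, q)]
    chosen = select_inference(context, candidates)
    return generate_response(context, chosen)

if __name__ == "__main__":
    dialogue = ["A: I finally submitted my dissertation last night.",
                "B: Wow, that must be a huge relief!"]
    print(csi_gsr_turn(dialogue, ["emotional reaction", "likely next event"]))
```

The key design point the sketch is meant to convey is the separation of concerns: inference generation can produce a broad, multi-faceted pool of candidates, while the selection step filters for the inference most worth grounding the response in.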
The findings of this dissertation highlight the current capabilities of LLM-based dialogue models and the benefits of incorporating predictive commonsense inferences for response guidance. The work on ABC-Eval reveals that commonsense errors are highly prevalent in neural dialogue systems, underscoring the importance of improving the commonsense capabilities of dialogue models. The work on ConvoSense produces powerful resources and models for capturing multi-faceted, predictive social commonsense inferences for dialogue. The work on CSI-GSR demonstrates the utility of these inferences for improving the specificity of responses to their dialogue contexts. Collectively, this body of work supports the pursuit of more nuanced, contextually aware, and intelligent human-computer interactions.