The Computational Common Ground project focuses on improving dialogue modeling in AI and Natural Language Processing (NLP). Its common ground tracking (CGT) model identifies the space of beliefs shared by all participants in a task-oriented dialogue, which is crucial for successful collaboration and communication. The project has been annotating multimodal classroom interactions to train CGT models toward two main goals: (1) recognizing the current set of shared beliefs, and (2) predicting future developments in the common ground as they pertain to the joint task the agents are solving. Because the CGT model provides an informative snapshot of the dialogue situation after each action in a collaborative task, it supports learning a dialogue policy that conditions not only on past dialogue history but also on the shared beliefs of all participants.
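To make the idea of a per-action belief snapshot concrete, here is a minimal sketch of a common ground tracker. All class and move names (`CommonGroundTracker`, `assert`, `accept`) are illustrative assumptions, not the project's actual model or annotation scheme: a proposition asserted by one participant is "under discussion" until another participant accepts it, at which point it enters the common ground.

```python
from dataclasses import dataclass, field

@dataclass
class CommonGroundTracker:
    """Toy tracker (illustrative only): the common ground is the set of
    propositions that all participants have accepted."""
    common_ground: set = field(default_factory=set)
    under_discussion: set = field(default_factory=set)

    def observe(self, speaker: str, move: str, proposition: str) -> None:
        # An assertion puts a proposition under discussion; acceptance
        # by another participant promotes it into the common ground.
        if move == "assert":
            self.under_discussion.add(proposition)
        elif move == "accept" and proposition in self.under_discussion:
            self.under_discussion.discard(proposition)
            self.common_ground.add(proposition)

    def snapshot(self) -> dict:
        # The per-action snapshot a dialogue policy could condition on.
        return {"common_ground": sorted(self.common_ground),
                "under_discussion": sorted(self.under_discussion)}

tracker = CommonGroundTracker()
tracker.observe("P1", "assert", "the red block weighs 10 grams")
tracker.observe("P2", "accept", "the red block weighs 10 grams")
print(tracker.snapshot())
```

After each `observe` call, `snapshot` returns the updated belief state, which is the kind of evolving representation a CGT-informed policy would consume alongside the raw dialogue history.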
This project investigates how dense paraphrasing, which rewrites text so that implicit semantic content is made explicit, can be effectively applied in natural language processing tasks. The research explores algorithms and models for improving paraphrasing performance.
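As a rough illustration of what dense paraphrasing produces, the sketch below applies hand-written rewrite rules that spell out an implicit event. The rules and examples are invented for illustration and are not the project's method or data:

```python
# Toy dense-paraphrasing sketch: make an implicit event explicit.
# These string-level rules are illustrative assumptions only; real
# systems would use learned models, not a lookup table.
RULES = {
    "finished the book": "finished reading the book",
    "started the coffee": "started brewing the coffee",
}

def dense_paraphrase(text: str) -> str:
    # Apply each rewrite rule; the output is a "denser" text in which
    # previously implicit arguments are stated explicitly.
    for implicit, explicit in RULES.items():
        text = text.replace(implicit, explicit)
    return text

print(dense_paraphrase("She finished the book on the train."))
# → She finished reading the book on the train.
```

The point of the sketch is the input/output contract: the paraphrase preserves the original meaning while adding the elided event ("reading"), which is what makes the enriched text more useful for downstream NLP tasks.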