Collaborative Reasoner: Self-improving Social Agents with Synthetic Conversations
Ansong Ni, Ruta Desai, Yang Li, Xinjie Lei, Dong Wang, Ramya Raghavendra, Gargi Ghosh, Daniel Li, Asli Celikyilmaz
NeurIPS 2025
With increasingly powerful large language models (LLMs) and LLM-based agents tackling an ever-growing list of tasks, we envision a future where numerous LLM agents work seamlessly with other AI agents or humans, facilitating everyday life in myriad ways, from problem-solving to planning, knowledge gathering, and learning. To navigate various scenarios in different social contexts, such LLM agents need to possess collaborative skills, such as effective communication and theory-of-mind, yet these skills are largely ignored in the predominant single-turn evaluation of LLMs. Moreover, improving the collaborative skills of these social agents would require large amounts of conversational data, which are often expensive and difficult to control, and thus hard to collect. To bridge the gap between the problem-solving and social collaboration skills of LLMs, we present Collaborative Reasoner (Coral), a framework to evaluate and improve the collaborative reasoning skills of language models. In particular, the tasks and metrics in Coral require agents to disagree with incorrect solutions, convince their partner of a correct solution, and ultimately agree as a team to commit to a final solution. Through evaluation of Coral on 5 collaborative reasoning tasks, we show that current models cannot consistently leverage collaboration to achieve better task performance, and that the social behaviors instilled by the current post-training process make them less desirable in collaborative scenarios. To improve the collaborative reasoning capabilities of LLMs, we propose a self-improvement approach using synthetic interaction data; to facilitate synthetic conversational data generation at scale, we build Matrix, a scalable and robust multi-agent communication framework. We leverage Coral and Matrix to synthesize supervised- and preference-finetuning data from the conversation turns in which an agent convinces its partner of a correct solution. Our self-improvement approach is shown to be effective on general, math, scientific, and social reasoning tasks, yielding improvements of up to 29.4% over the chain-of-thought performance of an equivalent single-agent LLM. We release code for Coral and Matrix for future research on collaborative social agents.
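To make the data-synthesis step concrete, the sketch below illustrates one plausible way to mine supervised and preference pairs from synthetic conversations: a turn is kept as a positive example when the partner switches from an incorrect solution to the correct one after reading it. This is a minimal illustration under our own assumptions, not the authors' released implementation; all names here (`Turn`, `is_convincing`, `mine_training_pairs`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str                   # which agent produced this turn
    text: str                      # the utterance
    partner_answer_before: str     # partner's committed solution before this turn
    partner_answer_after: str      # partner's committed solution after this turn

def is_convincing(turn: Turn, correct: str) -> bool:
    """A turn 'convinces' the partner if it moves them from an
    incorrect solution to the correct one (assumed criterion)."""
    return (turn.partner_answer_before != correct
            and turn.partner_answer_after == correct)

def mine_training_pairs(conversations, answers):
    """Collect SFT targets (context -> convincing turn) and DPO-style
    preference pairs (convincing turn preferred over an alternative
    sampled turn at the same conversation point)."""
    sft, prefs = [], []
    for conv, correct in zip(conversations, answers):
        context = []
        for turn, alt in conv:  # alt: alternative sample for this turn, or None
            if is_convincing(turn, correct):
                sft.append({"context": list(context), "target": turn.text})
                if alt is not None and not is_convincing(alt, correct):
                    prefs.append({"context": list(context),
                                  "chosen": turn.text,
                                  "rejected": alt.text})
            context.append(turn.text)
    return sft, prefs
```

In this reading, the supervised pairs reinforce turns that successfully steer the partner toward correct solutions, while the preference pairs contrast them with same-context turns that failed to do so; the actual selection criteria in Coral may differ.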