As societal divisions deepen, strategies for building consensus have become increasingly important. New research suggests that AI could help people with vastly divergent perspectives find common ground and a shared understanding.
The ability to make informed decisions collectively is crucial for a truly open and democratic society. Yet in recent years the art of deep, collective thinking has languished, partly due to the divisive impact of technologies like social media.
New research from Google DeepMind suggests that AI could also be part of the solution. The company has shown that an AI system can act as a mediator in group discussions, helping participants find common ground on contentious issues.
The research underscores AI's potential to support and enhance collective decision-making: the AI-mediated approach proved efficient and scalable, and on key performance metrics it outperformed human mediators.
The researchers were intrigued by philosopher Jürgen Habermas’ concept of communicative action, which suggests that, under optimal conditions, dialogue among rational actors can lead to consensus.
The researchers developed an AI tool that could consolidate the perspectives of a small group of people into a single statement. The language model was prompted to maximize the group's overall satisfaction with that statement. Group members then critiqued it, and their feedback informed a revised draft, a feedback loop that was repeated several times.
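The iterative loop described above can be sketched in miniature. This is an illustrative toy, not the researchers' actual system: the `draft_statement` and `satisfaction` functions below are hypothetical stand-ins for calls to a large language model and for participants' real ratings, using simple word overlap instead.

```python
def satisfaction(statement: str, opinion: str) -> float:
    """Toy proxy for a participant's satisfaction: fraction of their
    opinion's words that appear in the draft statement."""
    s, o = set(statement.lower().split()), set(opinion.lower().split())
    return len(s & o) / max(len(o), 1)


def draft_statement(opinions, critiques=()):
    """Stand-in for the language model: start from themes shared by
    everyone, then fold in words raised as critiques."""
    shared = set.intersection(*(set(o.lower().split()) for o in opinions))
    return " ".join(sorted(shared) + list(critiques))


def mediate(opinions, rounds=3):
    """Draft a statement, collect one critique per member, redraft,
    and repeat for a fixed number of rounds."""
    accepted = []  # critique words folded into the draft so far
    statement = draft_statement(opinions, accepted)
    for _ in range(rounds):
        words = set(statement.split())
        # Each member "critiques" by naming one word missing from the draft.
        new = [next((w for w in o.lower().split() if w not in words), None)
               for o in opinions]
        new = [c for c in new if c and c not in accepted]
        if not new:
            break  # no outstanding critiques: stop early
        accepted += new
        statement = draft_statement(opinions, accepted)
    score = sum(satisfaction(statement, o) for o in opinions) / len(opinions)
    return statement, score


opinions = [
    "lower the voting age to 16 for fairness",
    "keep the voting age at 18 for maturity",
]
statement, score = mediate(opinions)
print(statement, round(score, 2))
```

Each round pulls one missing concern from every member into the shared draft, so the average satisfaction score can only rise as the loop proceeds, which is the basic intuition behind optimizing for group-wide approval.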
To test the method, the researchers recruited approximately 5,000 UK-based participants via a crowdsourcing platform and split them into groups of six. The groups debated hot-button issues, including whether the voting age should be lowered from 18 to 16. Notably, one member of each group was also asked to write up the group's statements by hand, so that these could be compared with the machine-generated versions.
Participants preferred the AI-generated summaries 56% of the time, suggesting the model was adept at distilling group sentiment. Volunteers also gave the machine-generated texts higher ratings and stronger endorsements.
Notably, the researchers found that group agreement increased by a statistically significant 8% on average after the AI-mediated process. As discussions progressed, participants shifted toward the collective viewpoint in 30% of the deliberation rounds.
The method was designed to help groups uncover common ground. It skillfully incorporated dissenting opinions, striking a balance between representing minority viewpoints and reflecting the majority position.
The researchers also convened a diverse panel of 200 UK participants for a virtual citizens' assembly held over three one-hour sessions. The participants discussed nine contentious questions, and collective agreement increased significantly following their deliberations.
The AI still falls short of a human mediator, according to DeepMind's Michael Henry Tessler, because it lacks mediating capabilities such as fact-checking, keeping the discussion on topic, and actively moderating the conversation.
Nonetheless, according to Christopher Summerfield, research director at the UK AI Security Institute and one of the project's leaders, the technology is now ready for practical use and could add a vital layer of sophistication to opinion polling.
Without crucial safeguards in place, such as opening discussions with well-researched information and letting group members air their concerns freely, the process may inadvertently allow uninformed and hazardous opinions to infiltrate group statements. “I see incredible value in the artistry of conversation, even surpassing the most brilliant strategy,” said James Fishkin, a political scientist at Stanford University. “But, unfortunately, there isn’t much conversation to be found here.”
There is a risk that such tools could deepen polarization, but any technology that facilitates nuanced discussion in today's highly divided world deserves to be explored. While additional development may be necessary, dispassionate AI mediators have the potential to become a powerful tool for fostering consensus and unity.