Participants were divided into groups of six, with one member of each group randomly selected to write statements on behalf of the group. In each round of deliberation, participants were shown two statements, one written by the human mediator and one generated by the AI, and asked to indicate which they preferred.
More than half (57 percent) of participants preferred the AI-generated statement. The researchers found that the AI-generated statements were of higher quality than those written by human mediators and drew stronger endorsements. After deliberating with the help of the AI mediator, the groups were significantly less divided on the issues than they had been beforehand.
While AI systems can craft summaries that accurately capture collective opinions, it is important to recognize the technology's practical limitations, notes Joongi Shin, an expert on generative AI at Aalto University.
“Unless the scenario or context is transparently clear, enabling users to view raw input data rather than just relying on summary outputs, I believe such programs may raise ethical concerns,” he says.
Google DeepMind did not explicitly tell participants in its experiment that an AI system would be generating the collective opinion statements, although the consent form did mention that algorithms were involved.
“It's important to acknowledge that the current version of the model is limited in ways that hamper its use in real-world deliberation,” Tessler notes. “For example, it lacks crucial features such as fact-checking, the ability to stay on topic, and discourse moderation.”
More research is needed to determine where this technology could be applied and how to deploy it responsibly and safely. The company reportedly has no plans to release the model publicly.