Artificial intelligence has already demonstrated a remarkable aptitude for mimicking human conversation. New research suggests its powers of imitation go further, extending to replicating the personalities of specific individuals.
People are complicated. Our beliefs, character traits, and ways of making decisions are shaped by the interplay of nature and nurture, and they evolve over time through the accumulation of our life experiences.
But perhaps we are not as unique as we like to think. Scientists at Stanford University have found that a two-hour conversation is all an AI model needs to predict a person's responses to a battery of psychological tests and questionnaires with 85 percent accuracy.
While the notion of cloning people's personalities might seem unsettling at first, the researchers suggest the approach could be useful for simulating how different people would respond to a range of policy or media options.
“What we have the opportunity to do now is create models of individuals that are actually truly high-fidelity,” says Stanford's Joon Sung Park, who led the research. “We can build an agent of a person that captures a lot of their complexities and idiosyncratic nature.”
AI wasn't used only to create the replicas; it also played a crucial role in gathering the training data. The researchers had a voice-enabled version of OpenAI's GPT-4o conduct scripted interviews with participants, following a protocol developed by the American Voices Project, a social science initiative that collects opinions from households across the US.
The model was also prompted to ask follow-up questions based on participants' answers, making the interviews more dynamic and interactive. In total, the team interviewed 1,052 people across the US for two hours each, producing an individual transcript for every participant.
Using this data, the team built AI agents designed to answer questions the way the corresponding human would. Each time an agent was posed a question, it was given the full transcript of the relevant participant's interview alongside the question, and the model was instructed to imitate that participant's responses.
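The querying scheme described above can be sketched roughly as follows. This is a minimal illustration only: the function name, system prompt wording, and message layout are assumptions for the example, not the paper's actual prompt.

```python
def build_agent_messages(transcript: str, question: str) -> list[dict]:
    """Assemble a chat-style prompt asking the model to answer as the
    interviewed participant would (wording here is illustrative)."""
    system = (
        "You will be shown the full transcript of a two-hour interview "
        "with a participant. Answer the question exactly as that "
        "participant would, staying consistent with their stated views."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Interview transcript:\n{transcript}"},
        {"role": "user", "content": f"Question:\n{question}"},
    ]

messages = build_agent_messages(
    transcript="Interviewer: How do you feel about your neighborhood? ...",
    question="Generally speaking, would you say most people can be trusted?",
)
```

A message list like this could then be sent to any chat-completion endpoint; the key idea is simply that the entire transcript is provided as context for every question.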
To assess how closely the agents matched their human counterparts, the researchers had both complete a battery of tests. These included the General Social Survey, which measures social attitudes; a questionnaire assessing personality traits; a set of economic games evaluating financial decision-making; and several social science experiments.
Human responses to such tests are not perfectly consistent; people answer differently depending on context and timing, which could unfairly skew comparisons with the AI models. To measure this baseline consistency, participants completed the battery of tests twice, two weeks apart.
When the team compared the agents' responses with participants' initial answers, the agents were accurate about 69 percent of the time. After accounting for how much the humans' own answers varied between the two sessions, the agents' accuracy came out to 85 percent.
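The two figures are related by a simple normalization: the agents' raw agreement with participants is scaled by how consistently the participants agreed with their own earlier answers. A minimal sketch of the arithmetic, with the self-consistency value chosen only to illustrate how a 69 percent raw score can become 85 percent (the paper's exact normalization may differ):

```python
def normalized_accuracy(agent_agreement: float, self_consistency: float) -> float:
    """Scale raw agent-human agreement by the humans' own test-retest
    consistency, so agents are not penalized on questions the
    participants themselves answered inconsistently."""
    return agent_agreement / self_consistency

# Raw agreement of ~69% against an assumed ~81% human self-consistency
# yields roughly the reported 85% normalized accuracy.
print(round(normalized_accuracy(0.69, 0.81), 2))  # → 0.85
```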
Hassaan Raza, CEO of Tavus, a company that creates AI "digital twins" of its customers, was struck by how little data it took to build convincing replicas of real people. Tavus typically needs a substantial trove of emails and other information to build its AI clones.
“What was really cool here is that they show you might not need that much information,” he says. “How about you just talk to an AI interviewer for 30 minutes today and 30 minutes tomorrow? Then we use that to construct this digital twin of you.”
Creating realistic AI replicas of real people could prove a valuable tool for policymakers, says Richard Whittle at the University of Salford in the UK, who notes that focus groups made up of AI agents could be much cheaper and quicker to run than ones made up of humans.
But it is easy to see how the same technology could be abused, underscoring the need for responsible development and safeguards. Deepfakes have already been used to convincingly impersonate a high-ranking official in a sophisticated scam; the ability to replicate a target's entire personality would likely make such schemes even more effective.
If the researchers are right, AI systems capable of convincingly simulating specific individuals across a range of contexts may arrive sooner than we expect.