OpenAI revealed last Friday that its AI model, ChatGPT, had been exploited by an Iranian-affiliated disinformation operation. The group, designated Storm-2035, produced articles and online commentary aimed at shaping public perception of Vice President Kamala Harris and former President Donald Trump.
The operation's output went beyond the 2024 U.S. presidential race: its AI-generated content also covered Israel's invasion of Gaza, Israel's presence at the 2024 Olympics, the rights of U.S.-based Latinx communities, Venezuelan politics, and Scottish independence from the UK.
According to OpenAI, most of the operation's posts and articles garnered little engagement from real people. Even so, researchers found at least a dozen fake social media accounts, posing as both conservatives and progressives, using hashtags such as "#DumpTrump" and "#DumpKamala." Storm-2035 was also linked to at least one Instagram account spreading AI-generated content, OpenAI said.
The disclosure marks the first time OpenAI has publicly reported ChatGPT being used to spread disinformation and manipulate public opinion in a specific election.
OpenAI responded by banning the cluster of accounts responsible for generating the content and said it had shared threat intelligence with authorities, campaigns, and industry stakeholders. Though the company did not name the specific parties involved, it did provide screenshots of some offending posts, which showed view counts ranging from 8 to 207 and no likes.
According to OpenAI, Storm-2035 disseminated ChatGPT-generated articles across multiple websites posing as both liberal and conservative news outlets. Notably, the vast majority of social media posts linking to these articles garnered few or no interactions (likes, shares, or comments), and there was no evidence that the articles were widely shared across social media platforms.
A Microsoft representative described Storm-2035 as an Iranian network comprising four websites masquerading as news outlets, which generated "polarizing" content focused on the U.S. presidential election, LGBTQIA+ rights, and Israel's invasion of Gaza.
Foreign attempts to influence online narratives around U.S. elections have become nearly commonplace. Microsoft's August 6 report revealed a sophisticated Iran-linked phishing attack targeting a high-ranking presidential campaign official, without disclosing the individual's identity. Shortly after releasing the report, Microsoft took steps to mitigate the operation's potential impact on the 2024 presidential election. And in 2016, Russian hackers operating under the persona Guccifer 2.0 infiltrated the Democratic National Committee's (DNC) computer systems, making off with thousands of emails and documents that were subsequently leaked online ahead of that year's Democratic National Convention.
In response to such incidents, lawmakers and major technology companies have undertaken a multitude of initiatives over the years, including new partnerships, policy measures, and industry collaborations.