OpenAI said on Friday that it has banned a cluster of ChatGPT accounts tied to an Iranian influence operation that was generating content focused on the US presidential election. The company says the operation produced AI-generated articles and social media posts, but there is no indication the material garnered significant attention or viewership.
This is not the first time OpenAI has taken action against state-backed actors misusing ChatGPT. In May, the company disrupted five such campaigns.
These episodes recall earlier efforts by state actors to use social media platforms such as Facebook and Twitter to sway previous election cycles. Now, aided by lax regulation and more capable technology, similar groups (or perhaps the very same ones) are using generative AI to flood social media with disinformation. OpenAI appears to be taking a whack-a-mole approach, banning accounts linked to these efforts as they emerge.
OpenAI’s investigation into this cluster of accounts was aided by a recent Microsoft report, which identified the group (dubbed Storm-2035) as part of a broader campaign to influence U.S. elections that has been operating since 2020.
Microsoft described Storm-2035 as an Iranian network running multiple websites that mimic news outlets and actively engage US voter groups on opposing ends of the political spectrum, pushing polarizing content on issues such as the US presidential candidates, LGBTQ rights, and the Israel-Hamas conflict. It is a playbook seen across many such operations: the aim is to sow discord rather than promote any particular policy or agenda.
OpenAI identified five website fronts for Storm-2035, posing as both progressive and conservative news outlets with convincing domain names such as “evenpolitics.com”. The group used ChatGPT to draft several long-form articles, including one claiming that “X censors Trump’s tweets”, something that has not actually happened; if anything, Elon Musk’s platform has been encouraging former president Donald Trump to engage more on X.
On social media, OpenAI identified over a dozen X (formerly Twitter) accounts and one Instagram account linked to the operation. The company says the operation used ChatGPT to rewrite a number of political comments, which were then posted on those platforms. One misleading tweet falsely attributed to Kamala Harris a claim linking increased immigration costs to climate change, accompanied by the hashtag #DumpKamala.
OpenAI said it found no evidence that Storm-2035’s articles were shared widely, noting that the vast majority of the operation’s social media posts received few or no likes, shares, or comments. That is often the case with these operations, which are quick and cheap to spin up using AI tools such as ChatGPT. Expect more of them as the election nears, along with increasingly heated online debate as partisans on both sides dig in their heels and amplify their rhetoric.