Friday, December 13, 2024

AI-based systems enable the mass production of personalized misinformation at scale – Sophos

2024 is a year of elections, with approximately two billion potential voters worldwide. That makes it an attractive target for actors intent on spreading misinformation on a grand scale, with campaigns that are becoming increasingly sophisticated in the digital realm. Cybersecurity experts at Sophos X-Ops have examined the increasingly important role of generative AI technologies in enabling political influence operations through disinformation, as documented in their report.

Large language and image models give cybercriminals seeking political influence effective tools for creating sophisticated, tailored content on a massive scale. Campaigns of this kind were previously difficult and extremely labor-intensive to carry out; with the new technologies, the risk posed by malicious actors has reached unprecedented dimensions.

In recent times, cases of criminals misusing artificial intelligence have increasingly come to light, including the use of generative text to manipulate online platforms, the creation of misleading social media posts with generated images, and the use of deepfake video and voice for social engineering. As political disinformation and deception campaigns increasingly employ the same tools, concern is growing about their impact on public discourse.

“Given the political relevance of the topic, it is particularly important to understand the impact of these new technologies on targeted misinformation,” said Ben Gelman, Senior Data Scientist at Sophos. “In our analysis, we examine one of the largest emerging threats posed by maliciously deployed generative AI: tailored misinformation.” Whereas in a mass email campaign most recipients simply ignore messages that conflict with their views, microtargeting focuses solely on the individuals most likely to agree with the disinformation, giving such campaigns an alarming new efficiency.

The report is based on research conducted by Sophos X-Ops using a tool developed specifically for this purpose. The tool automatically generates fraudulent e-commerce campaigns from AI-generated text, images, and audio files, making it possible to create numerous convincing yet fraudulent online stores. After a reconfiguration, the same tool could generate political campaign websites with any desired characteristics. By combining fake social media profiles with multiple campaign websites, the researchers were able to craft persuasive, personalized emails that use individual arguments to sway people toward supporting a cause, even if they do not generally agree with such ideas. The precise approach behind this large-scale microtargeting strategy is explored, with various examples, in the English-language report.
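To make the mechanics concrete, the sketch below illustrates how such a microtargeting pipeline could be assembled in principle. It is a minimal illustration, not Sophos's actual tool: the `generate_text` helper, the `VoterProfile` fields, the prompt wording, and the example campaign URL are all hypothetical stand-ins for whatever text-generation backend and audience data an attacker might use.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class VoterProfile:
    """Hypothetical microtargeting profile assembled by an attacker."""
    name: str
    interests: List[str]
    stance_keywords: List[str]  # arguments the target is likely to agree with


def generate_text(prompt: str) -> str:
    """Stand-in for any large-language-model text generator.

    A real pipeline would call an LLM API here; this stub simply echoes the
    prompt so the example stays self-contained and runnable.
    """
    return f"[model output conditioned on]\n{prompt}"


def build_personalized_email(profile: VoterProfile, campaign_site: str) -> str:
    """Compose a persuasion email tailored to one recipient.

    The microtargeting step: the message is built only around arguments the
    target is already predisposed to accept, and it links back to a matching
    fake campaign website.
    """
    prompt = (
        f"Write a short email to {profile.name}, who cares about "
        f"{', '.join(profile.interests)}. Argue for supporting the campaign at "
        f"{campaign_site} using only points related to "
        f"{', '.join(profile.stance_keywords)}."
    )
    return generate_text(prompt)


if __name__ == "__main__":
    profiles = [
        VoterProfile("Alex", ["local schools"], ["education funding"]),
        VoterProfile("Sam", ["small business"], ["lower taxes"]),
    ]
    for p in profiles:
        # One fake campaign site per audience segment, as described above.
        print(build_personalized_email(p, "https://example-campaign.invalid"))
        print("-" * 40)
```

The structure is what makes the threat scale: looping over thousands of such profiles turns a handcrafted scam into the mass-produced, personalized misinformation the report warns about.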

The capacity to produce finely tuned, personalized political content poses a significant risk of amplified misinformation, financial fraud, and deepening ideological polarization. Countering this development requires a multifaceted approach that combines technological, educational, and legislative efforts, as Gelman suggests.

From a technical perspective, continuous improvement of classifiers for AI-generated content and of fact-checking tools can help identify threats. On the education side, raising public awareness of how AI-generated content can be combined with illegally obtained personal data can significantly reduce the number of people falling prey to such scams. And, of course, how lawmakers handle the topic of artificial intelligence will also play a crucial role.
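As an illustration of the defensive side, the sketch below shows the general shape of a text classifier that could be trained to flag machine-generated content, one of the technical measures mentioned above. It is a minimal example using scikit-learn with a tiny made-up dataset; a production detector would need a large, carefully labeled corpus and far more robust features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: texts labeled 1 if machine-generated, 0 if human-written.
# These examples are placeholders; a real detector needs a large labeled corpus.
texts = [
    "Our candidate will deliver unprecedented synergy for every community.",
    "I walked to the polling station with my neighbor this morning.",
    "Vote for progress, prosperity, and a brighter tomorrow for all citizens.",
    "Honestly, I'm still undecided after reading the local paper.",
]
labels = [1, 0, 1, 0]

# Character n-grams are a simple, commonly used stylistic signal.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Score a new message: estimated probability that it looks machine-generated.
sample = "Together we will unlock boundless opportunity for every family."
print(detector.predict_proba([sample])[0][1])
```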
