As countries prepare for key elections in this era defined by artificial intelligence, citizens will become primary targets of hacktivists and nation-state actors seeking to disrupt the democratic process.
Generative AI hasn't fundamentally altered how malicious content spreads, but it has significantly accelerated its pace and sharpened its precision.
According to Allie Mellen, principal analyst at Forrester Research, the technology has enabled malicious actors to create more sophisticated phishing emails at scale, targeting sensitive information about specific candidates or elections. Mellen's research covers security operations, nation-state threats, and the use of machine learning and artificial intelligence in security tools. Her team is closely tracking the prevalence of misinformation and disinformation throughout 2024.
Safeguarding against the spread of misinformation and disinformation, she said, is a key priority of that work.
According to a study by Yubico and Defending Digital Campaigns, 79% of US voters expressed concern that AI-generated content could be used to impersonate a politician or create fraudulent information. Forty-three percent of respondents said such content could compromise the outcome of the 2024 elections. The survey, conducted by market research firm OnePoll, polled 2,000 registered US voters on how cybersecurity and artificial intelligence (AI) could affect the 2024 election campaigns.
When played an audio clip generated by an AI voice, 41% of respondents thought the voice was human. Around half said they had received an email or text message that appeared to come from a campaign but that they suspected was a phishing attempt.
As the presidential election approaches, the threat of cyberattacks against candidates, staff members, and anyone connected to a campaign has reached unprecedented levels, said Defending Digital Campaigns president and CEO Michael Kaiser. "Implementing robust cybersecurity measures will no longer be a choice but a necessity for anyone involved in political operations." For campaigns, what is at risk is not only the loss of valuable information but also the alienation of potential voters.
Noting that campaigns are built on trust, David Treece, vice president of solutions architecture at Yubico, said in the release that hacks such as phishing emails, or deepfake videos on social media that engage directly with their audience, can undermine that trust. Treece urged candidates to take proactive measures to protect their campaigns and adopt robust cybersecurity practices to maintain the confidence of constituents.
Because humans are often the last line of defense against disinformation, heightened public awareness of false content is crucial, Mellen told ZDNET.
She stressed that tech companies must recognize securing elections is no longer solely a government concern, but a pressing national issue that every stakeholder across the industry needs to help address.
Governance, in particular, is key, she said. Even if an organization cannot identify every attempted deepfake or social engineering attack, robust gating mechanisms and procedures can prevent an employee from making a financial transfer to an unverified external entity.
"Ultimately, it's crucial to tackle the root cause of the problem, rather than just treating its symptoms," Mellen said. Most important, she added, is putting robust governance and multi-layered validation processes in place to verify that transactions are legitimate.
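As a loose illustration of the kind of layered gating Mellen describes, the sketch below shows several independent checks that all have to pass before a funds transfer goes through. The payee allowlist, approval threshold, and function names are hypothetical assumptions for illustration, not any organization's actual controls.

```python
# Hypothetical sketch of layered governance checks before a funds transfer executes.
# The policy values, allowlist, and approver rules below are illustrative assumptions.

KNOWN_PAYEES = {"payroll-provider", "office-landlord"}   # assumed pre-registered payees
DUAL_APPROVAL_THRESHOLD = 10_000                         # assumed policy value

def transfer_allowed(payee: str, amount: float,
                     approvers: set[str],
                     verified_out_of_band: bool) -> bool:
    """Return True only if every governance layer passes."""
    if payee not in KNOWN_PAYEES:             # layer 1: payee must be pre-registered
        return False
    if not verified_out_of_band:              # layer 2: request confirmed via a separate channel
        return False
    if amount >= DUAL_APPROVAL_THRESHOLD and len(approvers) < 2:
        return False                          # layer 3: large transfers need two approvers
    return True

# A deepfaked "executive" request to wire money to a new account fails layer 1,
# no matter how convincing the audio or email was.
print(transfer_allowed("new-vendor-from-email", 50_000, {"cfo"}, False))  # False
```

The point of the sketch is that detection of the deepfake itself is not required; the process controls stop the transfer regardless.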
At the same time, she said, the industry must keep improving its ability to detect deepfakes and fraudulent content generated by AI.
Most attackers exploiting generative AI are nation-state actors; other malicious parties largely stick to tried-and-tested tactics rather than innovating. Nation-state actors are particularly driven to maximize the impact of their attacks, which pushes them to adopt new technologies and tactics to compromise systems they otherwise could not access. She warned, though, that some threats may also come from within.
As humans, our greatest strength lies in our creativity and ability to adapt, but simultaneously, we are vulnerable to manipulation through emotional appeals.
Nathan Wenzler, chief security strategist at Tenable, agreed, saying nation-state actors will likely intensify their attempts to exploit people's trust through misinformation and disinformation.
While his team has not identified any novel security threats this year following the rise of generative AI, Wenzler noted that the technology has enabled attackers to operate at unprecedented scale and scope.
Nation-state actors exploit the public's tendency to take online information at face value, and generative AI lets them produce content aligned with their goals at scale, Wenzler told ZDNET.
AI's ability to generate convincingly realistic phishing emails and deepfakes has turned social engineering into a potent launching pad for cyberattacks, Wenzler said.
Cyber-defense tools have become increasingly effective at plugging technical vulnerabilities, making IT systems significantly harder to compromise. Attackers understand this, he said, and are shifting toward an easier target: people.
Because breaching systems on the technical side is becoming harder, individuals have become the more vulnerable point of entry, and GenAI is another significant step in that direction, he noted. With this technology, social engineers can optimize their attacks, achieving better results while spending less time creating malicious content.
Cybercriminals may send out 10 million phishing emails; content compelling enough to persuade even 1% more targets to click translates into an additional 100,000 victims, he said.
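A back-of-the-envelope check of that scale math, assuming the 1% figure means one additional percentage point of recipients clicking:

```python
# Illustrative arithmetic only; the figures come from Wenzler's example above.
emails_sent = 10_000_000
extra_click_rate = 0.01          # one percentage point more recipients persuaded to click
additional_victims = int(emails_sent * extra_click_rate)
print(additional_victims)        # 100000
```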
It's all about pace and scale, he said. "GenAI is poised to become a formidable tool for crafting sophisticated social engineering attacks."
Governments must strike a balance between harnessing the benefits of generative AI and addressing its potential risks.
Wenzler stressed that the human element has to be addressed for any defense to work. "It's another attack on trust," he said. It plays on human psychology: people want to trust what they see, and they need to be able to believe one another. Yet people are not always inclined to question what they are shown, and doing so is only getting harder as generative AI improves. "Deepfakes are getting extremely good."
There is still no comprehensive plan for dealing with this, he admitted, and any remedy tends to arrive too late, after the damage has already been done; some people may take a long time to realize that what they saw was false.
Eventually, he said, security companies will develop tools that address the issue, such as deepfake detection, as part of an automated defense infrastructure.
Large language models need security
Organizations must also pay attention to the data used to train AI models.
Mellen cautioned that the training data used in large language models (LLMs) requires rigorous vetting and safeguards against malicious attacks such as data poisoning. Tainted AI models can produce inaccurate and misleading results.
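A minimal sketch of what a vetting pass over candidate training records might look like, under stated assumptions: the record fields, trusted-source allowlist, and heuristics (provenance check, duplicate-flooding check) are hypothetical illustrations, not Mellen's or any vendor's actual pipeline.

```python
# Illustrative vetting pass over candidate fine-tuning records before they reach a model.
# Field names, the trusted-source list, and heuristics are assumptions for illustration.

from dataclasses import dataclass

TRUSTED_SOURCES = {"internal-docs", "vetted-partner-feed"}  # hypothetical allowlist

@dataclass
class Record:
    text: str
    source: str

def vet(records: list[Record]) -> tuple[list[Record], list[Record]]:
    """Split records into accepted and quarantined sets using simple heuristics."""
    seen: set[str] = set()
    accepted, quarantined = [], []
    for r in records:
        normalized = " ".join(r.text.split()).lower()
        if r.source not in TRUSTED_SOURCES:      # provenance check
            quarantined.append(r)
        elif normalized in seen:                  # exact-duplicate flooding check
            quarantined.append(r)
        else:
            seen.add(normalized)
            accepted.append(r)
    return accepted, quarantined

if __name__ == "__main__":
    sample = [
        Record("Polling stations open at 7 a.m.", "internal-docs"),
        Record("Polling stations open at 7 a.m.", "internal-docs"),           # duplicate
        Record("Candidate X has withdrawn from the race.", "unknown-blog"),   # untrusted source
    ]
    ok, held = vet(sample)
    print(len(ok), "accepted;", len(held), "quarantined")
```

Real-world poisoning defenses are far more involved, but the idea is the same: records are screened and quarantined before they can influence the model.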
Sergey Shykevich, threat intelligence group manager at Check Point Software, also pointed to concerns around large language models (LLMs), including the powerful models behind major platforms such as those from Google and Meta.
Nation-state actors may look to gain access to these engines and manipulate the responses generated by generative AI platforms, Shykevich warned in an interview with ZDNET. The resulting misinformation and disinformation could then spread across social media, shaping public perception and potentially altering the outcome of elections.
In the absence of regulations governing LLMs, he stressed, the companies operating these platforms must take responsibility for securing them and be more transparent about how they do so.
Because generative AI is still an emerging technology, those administering it may struggle to understand how responses are generated, Mellen noted.
Wenzler noted that companies can reduce the risks by using smaller, more focused, custom-built LLMs, which make it easier to manage and protect the data used to train their generative AI applications.
Larger datasets have their benefits, but the companies that do this well assess their risk tolerance and strike the right balance.
Wenzler also called on governments to move quickly to establish frameworks addressing the risks of generative AI, providing guidance on how the technology can and cannot be adopted and deployed, he said.