The Conference on Applied Machine Learning for Information Security (CAMLIS) recently took place in Arlington, Virginia, featuring keynote talks alongside a more relaxed "poster session" that ran throughout the event. The conference's focus aligns closely with the core of SophosAI's work: finding practical ways to use machine learning and artificial intelligence to defend against information security threats, while guarding against the risks posed by AI models themselves.
On October 24, SophosAI researchers Ben Gelman, Sean Bergeron, and Younghoo Lee presented during the poster session. Gelman and Bergeron delivered a poster titled "Revitalizing Small Cybersecurity Models in the Age of Artificial Intelligence."
While large language models like OpenAI's GPT-4, Google's Gemini, and Meta's LLaMA have received significant attention in recent research, smaller machine learning models have often been overlooked. Yet small models remain crucial for securing data at network boundaries and on endpoints, where the computational and network costs of large language models make them impractical to deploy.
Gelman and Bergeron explored strategies for harnessing large language model (LLM) expertise to improve the training process for smaller models, highlighting SophosAI's approaches for enabling small, inexpensive models to perform a variety of cybersecurity tasks at a surprisingly high level.
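SophosAI's actual techniques are not detailed in the poster summary, but the general idea of training a small model on a larger model's soft outputs (knowledge distillation) can be sketched minimally. In this illustration, a hand-written scoring rule stands in for the LLM "teacher," and the "student" is a tiny logistic regression trained on cheap features; all names and features here are hypothetical, not SophosAI's.

```python
import math
import random

SUSPICIOUS = ["login", "verify", "urgent", "free"]
NEUTRAL = ["meeting", "report", "invoice", "hello", "schedule"]

def teacher_score(words):
    # Hypothetical teacher: a stand-in for an LLM's soft judgement,
    # implemented as a simple rule so the example is self-contained.
    hits = sum(w in SUSPICIOUS for w in words)
    return 1.0 / (1.0 + math.exp(-(2.0 * hits - 1.0)))

def features(words):
    # Cheap features a small model could compute on an endpoint.
    return [float(sum(w in SUSPICIOUS for w in words)), len(words) / 10.0, 1.0]

def train_student(samples, lr=0.1, epochs=300):
    # Logistic regression fit to the teacher's *soft* labels (distillation),
    # rather than to hard ground-truth labels.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for words in samples:
            x, y = features(words), teacher_score(words)
            p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            g = p - y  # gradient of cross-entropy w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w

def student_score(w, words):
    x = features(words)
    return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

random.seed(0)
pool = SUSPICIOUS + NEUTRAL
data = [[random.choice(pool) for _ in range(random.randint(3, 8))]
        for _ in range(200)]
w = train_student(data)

phishy = ["urgent", "verify", "login", "free"]
benign = ["meeting", "report", "schedule"]
print(student_score(w, phishy) > student_score(w, benign))  # → True
```

Once trained, the student no longer needs the teacher at all: it scores inputs using only its three cheap features, which is what makes the approach attractive for resource-constrained deployment.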
In another poster, Lee discussed "A fusion of large language models and lightweight machine learning for efficient phishing email detection." With adversaries increasingly using large language models to generate convincing, targeted phishing emails with unique textual patterns, and leveraging previously unseen domains to evade conventional spam and phishing defenses, Lee investigated how these models can be turned against the attackers, and how they can be combined with traditional, smaller machine learning models to improve efficiency.
In the framework Lee outlines in his research, large language models (LLMs) are used to identify and flag potentially malicious signals, such as sender spoofing and deceptive domain names. By combining LLMs with lightweight machine learning models, it is possible to boost phishing detection accuracy while overcoming the limitations of relying on either type of model alone.
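One plausible way such a combination could work, sketched here purely as an illustration (the exact pipeline is not described in the summary), is a tiered design: a cheap model screens every email, and only ambiguous cases are escalated to an LLM check. Both "models" below are deterministic stand-ins, and all field names and thresholds are hypothetical.

```python
def lightweight_score(email):
    # Stand-in for a small trained classifier: keyword heuristic in [0, 1].
    keywords = ["password", "urgent", "verify", "click"]
    hits = sum(k in email["body"].lower() for k in keywords)
    return min(1.0, hits / 3.0)

def llm_flags(email):
    # Stand-in for an LLM prompt that inspects headers for sender spoofing
    # and look-alike domains; here, two deterministic checks.
    flags = []
    claimed = email["from_display"].lower()
    domain = email["from_addr"].split("@")[-1].lower()
    if claimed and claimed not in domain:
        flags.append("sender_spoofing")
    if any(ch.isdigit() for ch in domain.split(".")[0]):
        flags.append("deceptive_domain")  # e.g. digits imitating letters
    return flags

def classify(email, low=0.2, high=0.8):
    s = lightweight_score(email)
    if s >= high:
        return "phishing"
    if s <= low:
        return "benign"
    # Grey zone: spend the expensive LLM call only here.
    return "phishing" if llm_flags(email) else "benign"

mail = {
    "from_display": "paypal",
    "from_addr": "security@paypa1-support.com",
    "body": "Please verify your account",
}
print(classify(mail))  # → phishing
```

The design choice is the key point: the LLM's accuracy is applied only where the cheap model is uncertain, which keeps average cost per email close to that of the lightweight model alone.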
During the second day of CAMLIS, SophosAI's Tamás Vörös delivered a talk on his research into neutralizing malicious large language models (LLMs): models that incorporate embedded backdoors or malware designed to be triggered by specific inputs.
The presentation, titled "Collective Backdoor Activations in LLMs: A Cautionary Tale," demonstrates the dangers of using "black box" LLMs (by showing how the SophosAI team injected their own controlled Trojans into popular models) and presents "noising" strategies that can be used to neutralize existing Trojan activation commands.