Generative AI sparked intense discussion at the ISC2 Security Congress in Las Vegas in October 2024. Its rapid evolution is poised to reshape the cyber arms race, presenting both attackers and defenders with unprecedented opportunities and challenges.
Alex Stamos, Chief Information Security Officer (CISO) at SentinelOne and a professor of computer science at Stanford University, spoke with TechRepublic about today's most pressing cybersecurity concerns and how artificial intelligence (AI) can both aid and hinder attackers. He also shared ideas for making the most of Cybersecurity Awareness Month.
Small and mid-sized businesses face outsized cyber threats
The vast majority of businesses are struggling to cope with the adversaries they face. Small and medium-sized enterprises are up against financially motivated attackers who honed their tactics against larger corporations, practicing every day as they break into corporate systems. They have become very good at what they do.
By the time they infiltrate a 200-person organization or a small regional hospital, they are extremely skilled. As a security industry, we have yet to build and deploy security products that work for a small regional hospital.
The gap between the defensive talent you can hire or build and the adversaries you face at each tier is a significant challenge for businesses at every scale. You can build effective teams, but operating at the scale required to defend against high-end adversaries like Russia's SVR or China's PLA and MSS, the opponents you face when you carry geopolitically significant risk, is extremely difficult. So at every tier, there is a mismatch.
For now, defenders have the advantage with generative AI
Right now, AI has been a net positive for defenders, because defenders have spent the money to do the R&D. Our founders pioneered the move away from traditional signature-based approaches by using artificial intelligence and machine learning to build a more effective detection method. Generative AI is also driving significant operational efficiencies within Security Operations Centers (SOCs): instead of learning how to formulate a complex query, an analyst can simply ask the console in plain English, "Show me all computers that downloaded a new piece of software within the last 24 hours." As a result, the initial gains in efficiency and reduced costs are being reaped by the defensive side.
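To make the plain-English-query idea concrete, here is a minimal, purely illustrative sketch of translating such a question into a structured query. A real SOC console would call a large language model; a trivial keyword heuristic stands in here so the example is self-contained, and the telemetry schema (`events` with `hostname`, `event_type`, `timestamp`) is an assumption, not any vendor's actual API.

```python
import re

# Hypothetical telemetry schema assumed for illustration:
#   events(hostname TEXT, event_type TEXT, timestamp INTEGER)

def nl_to_query(question: str) -> str:
    """Translate a narrow class of plain-English questions into SQL.

    Stands in for the LLM-backed translation a real console would do.
    """
    hours = 24  # default lookback window
    match = re.search(r"last (\d+) hours", question)
    if match:
        hours = int(match.group(1))
    if "downloaded" in question and "software" in question:
        return (
            "SELECT DISTINCT hostname FROM events "
            "WHERE event_type = 'software_download' "
            f"AND timestamp >= strftime('%s','now') - {hours * 3600}"
        )
    raise ValueError("question not understood")

query = nl_to_query(
    "Show me all computers that downloaded a new piece of software "
    "within the last 24 hours"
)
print(query)
```

The value for the analyst is that the structured query, not the English, is what actually runs, so the model's output can be validated against the schema before execution.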
Attackers have begun adopting these tools without yet realizing their full advantages, and what is truly unsettling is that they are just getting started, with most of the benefits still ahead of them. To date, the primary applications of GenAI's outputs have been aimed at human consumption.
A key property of generative AI (GenAI), particularly large language models and diffusion models for image generation, is that the output space of plausible English text or images is effectively infinite. The output space of working exploits that a CPU will actually execute, by contrast, is extremely constrained.
Despite these advances, one of GenAI's persistent challenges is producing well-structured output, and structured inputs and outputs are precisely where these systems are being most intensively researched, tested, and validated. As AI gets better at processing structured data and generating consistent, reliable results within clear boundaries, it will unlock a much wider range of legitimate uses.
At present, attackers primarily use GenAI to craft convincing phishing lures and to bridge language barriers in negotiations with ransomware actors who do not speak their victims' language. The genuine worry, however, is the prospect of AI becoming proficient at generating sophisticated exploit code: feed it a novel bug, and it produces exploit code that works against it.
The skills needed to write that kind of code currently reside with just a handful of individuals. If that capability were built into a generative AI (GenAI) model available to tens or hundreds of thousands of skilled offensive operators, it would represent a substantial leap in offensive capability.
Going forward, hyper-automation and orchestration deserve special attention. As long as AI is used under human supervision, it shouldn't be that dangerous: asking AI questions whose answers a person then reviews is not, by itself, risky. But instructing AI to identify every machine meeting specific criteria and then isolate them raises real concerns, because it creates circumstances in which serious mistakes can happen. If AI is given the capacity to make autonomous decisions, it could become extremely hazardous. Most people intuitively recognize this; human SOC (Security Operations Center) analysts are skilled and vigilant, but they, too, make mistakes.
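The guardrail described above, keeping a human between the AI's proposal and the destructive action, can be sketched in a few lines. Every name here (`Machine`, `find_matching`, `isolate_with_approval`) is hypothetical and illustrative, not a real product API; the point is only the deny-by-default shape.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    hostname: str
    isolated: bool = False

def find_matching(machines, predicate):
    """AI-proposed selection: every machine meeting the criteria."""
    return [m for m in machines if predicate(m)]

def isolate_with_approval(candidates, approve):
    """Isolate machines only if a human approves the full list."""
    if not approve([m.hostname for m in candidates]):
        return []  # deny by default: no action without human sign-off
    for m in candidates:
        m.isolated = True
    return candidates

fleet = [Machine("hr-laptop-01"), Machine("db-server-02")]
flagged = find_matching(fleet, lambda m: "laptop" in m.hostname)
# The automation proposes; the human disposes. Here the reviewer declines.
acted = isolate_with_approval(flagged, approve=lambda hosts: False)
```

The design choice is that `approve` sees the complete list of hosts before anything happens, so a mistaken selection is caught at human scale rather than executed fleet-wide.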
Cybersecurity awareness can be fun, not punitive
Cybersecurity Awareness Month is an opportune time for phishing exercises, but phishing tests run to catch people out can destroy the trust between cybersecurity professionals and everyone else. What I like to do for Cybersecurity Awareness Month is make it engaging and gamify it, with challenges along the way and rewards at the end.
We ran a memorable program at Facebook called Hacktober. We offered prizes, video games, and t-shirts, and we kept separate leaderboards for technical and non-technical employees. On the technical side, people hunted for and fixed bugs in our systems; everyone could participate on the non-technical side.
If you caught the phishing emails we sent, or participated in our quizzes and other activities, you could join in and earn rewards.
Gamification makes the experience fun and engaging rather than punitive and adversarial. Punishing people is not a position security teams should be in.
I do believe security organizations need to be honest with people about the real risks we face, because that vulnerability is shared by all of us.