As one of many defining applied sciences of this century, synthetic intelligence (AI) appears to witness every day developments with new entrants to the sphere, technological breakthroughs, and artistic and revolutionary functions. The panorama for AI safety shares the identical breakneck tempo with streams of newly proposed laws, novel vulnerability discoveries, and rising menace vectors.
Whereas the velocity of change is thrilling, it creates sensible limitations for enterprise AI adoption. As our Cisco 2024 AI Readiness Index factors out, considerations about AI safety are often cited by enterprise leaders as a major roadblock to embracing the total potential of AI of their organizations.
That’s why we’re excited to introduce our inaugural State of AI Safety report. It gives a succinct, easy overview of a few of the most essential developments in AI safety from the previous 12 months, together with developments and predictions for the 12 months forward. The report additionally shares clear suggestions for organizations trying to enhance their very own AI safety methods, and highlights a few of the methods Cisco is investing in a safer future for AI.
Right here’s an outline of what you’ll discover in our first State of AI Safety report:
Evolution of the AI Risk Panorama
The speedy proliferation of AI and AI-enabled applied sciences has launched an enormous new assault floor that safety leaders are solely starting to deal with.
Threat exists at just about each step throughout your complete AI improvement lifecycle; AI belongings will be straight compromised by an adversary or discreetly compromised although a vulnerability within the AI provide chain. The State of AI Safety report examines a number of AI-specific assault vectors together with immediate injection assaults, knowledge poisoning, and knowledge extraction assaults. It additionally displays on the usage of AI by adversaries to enhance cyber operations like social engineering, supported by analysis from Cisco Talos.
Wanting on the 12 months forward, cutting-edge developments in AI will undoubtedly introduce new dangers for safety leaders to pay attention to. For instance, the rise of agentic AI which may act autonomously with out fixed human supervision appears ripe for exploitation. Then again, the scale of social engineering threatens to develop tremendously, exacerbated by highly effective multimodal AI instruments within the unsuitable arms.
Key Developments in AI Coverage
The previous 12 months has seen important developments in AI coverage, each domestically and internationally.
In the US, a fragmented state-by-state strategy has emerged within the absence of federal rules with over 700 AI-related payments launched in 2024 alone. In the meantime, worldwide efforts have led to key developments, such because the UK and Canada’s collaboration on AI security and the European Union’s AI Act, which got here into pressure in August 2024 to set a precedent for international AI governance.
Early actions in 2025 counsel better focus in direction of successfully balancing the necessity for AI safety with accelerating the velocity of innovation. Latest examples embody President Trump’s government order and rising help for a pro-innovation setting, which aligns nicely with themes from the AI Motion Summit held in Paris in February and the U.Okay.’s latest AI Alternatives Motion Plan.
Authentic AI Safety Analysis
The Cisco AI safety analysis workforce has led and contributed to a number of items of groundbreaking analysis that are highlighted within the State of AI Safety report.
Analysis into algorithmic jailbreaking of huge language fashions (LLMs) demonstrates how adversaries can bypass mannequin protections with zero human supervision. This method can be utilized to exfiltrate delicate knowledge and disrupt AI companies. Extra not too long ago, the workforce explored automated jailbreaking of superior reasoning fashions like DeepSeek R1, to show that even reasoning fashions can nonetheless fall sufferer to conventional jailbreaking methods.
The workforce additionally explores the security and safety dangers of fine-tuning fashions. Whereas fine-tuning is a well-liked technique for bettering the contextual relevance of AI, many are unaware of the inadvertent penalties like mannequin misalignment.
Lastly, the report evaluations two items of authentic analysis into poisoning public datasets and extracting coaching knowledge from LLMs. These research make clear how simply—and cost-effectively—a foul actor can tamper with or exfiltrate knowledge from enterprise AI functions.
Suggestions for AI Safety
Securing AI methods requires a proactive and complete strategy.
The State of AI Safety report outlines a number of actionable suggestions, together with managing safety dangers all through the AI lifecycle, implementing robust entry controls, and adopting AI safety requirements such because the NIST AI Threat Administration Framework and MITRE ATLAS matrix. We additionally have a look at how Cisco AI Protection may also help companies adhere to those finest practices and mitigate AI threat from improvement to deployment.
Learn the State of AI Safety 2025
Able to learn the total report? Yow will discover it right here.
We’d love to listen to what you suppose. Ask a Query, Remark Under, and Keep Related with Cisco Safe on social!
Cisco Safety Social Channels
Share: