Friday, April 4, 2025

Only 32% of security professionals believe AI will make their jobs easier.

According to a recent survey conducted by HackerOne, a leading security platform, nearly half (48%) of security professionals believe that AI poses the most significant risk to their organization's security. Their primary concerns about adopting AI technology include:

  • Leaked training data (35%).
  • Unauthorized usage (33%).
  • External parties gaining access to and manipulating AI models.

As fears of AI-related vulnerabilities mount, there is a pressing need for companies to re-examine and strengthen their AI safety protocols before these weaknesses become tangible risks.

AI tools often generate false positives for security teams.

While the full findings of the Hacker-Powered Security Report won’t be publicly available until later this fall, preliminary results suggest that 58% of security professionals believe security teams and threat actors are locked in an “arms race” to harness generative AI in their respective operations.

According to a SANS survey, 71% of security professionals reported successfully using AI to automate mundane tasks. The same respondents, however, warned that malicious actors are also leveraging AI to enhance their operations. Respondents were most concerned about AI-powered phishing campaigns (79%) and automated vulnerability exploitation (74%).

According to Matt Bromiley, an analyst at the SANS Institute, security teams must identify the best applications of AI to keep pace with adversaries while also acknowledging its current limitations, or risk inadvertently creating more work for themselves.

The answer? External review of AI implementations. Nearly seven in ten survey respondents (68%) chose “external review” as the most effective way to ensure AI safety and security.

According to Dane Sherrets, Senior Solutions Architect at HackerOne, teams are becoming more realistic about AI’s current limitations, a sentiment he notes has intensified over the past year. While AI excels in many areas, it cannot fully substitute for the nuance and essential context that humans bring to defensive and offensive security. Concerns about hallucinations have also made teams reluctant to deploy the technology in high-stakes applications. Instead, AI’s strengths lie in boosting productivity and handling routine tasks that don’t require complex judgment.

Other findings from the SANS 2024 AI Survey, released this month, include:

  • Approximately 38% of companies plan to adopt AI within their security strategy in the future.
  • 38.6% of respondents reported experiencing shortcomings when using AI to detect or respond to cyber threats.
  • Approximately 40% of respondents cite legal and ethical concerns as a significant barrier to AI adoption.
  • 41.8% of companies have faced pushback from employees who don’t trust AI-driven decisions, which SANS attributes to a perceived lack of transparency in how AI reaches those decisions.
  • Approximately 43% of companies currently use AI within their security strategy.
  • AI within security operations is most commonly used for anomaly detection (56.9% of organizations), followed by malware detection (50.5%) and automated incident response (48.9%).
  • Only 42% of respondents said their AI systems were effective at detecting new threats and responding to outlier indicators, which SANS attributes largely to a lack of adequate training data.
  • 71% of respondents who experienced shortcomings when using AI to detect or respond to cyber threats cited AI’s tendency to generate false positives as the primary issue.

Anthropic invites AI and cybersecurity experts to help secure its AI systems.

Generative AI developer Anthropic expanded the scope of its program on the HackerOne platform in August.

The company is specifically asking hackers to stress-test “the safeguards designed to prevent the misuse of our products” and to attempt to breach the guardrails intended to prevent its AI from generating recipes for explosive devices or cyberattacks. Anthropic is offering rewards of up to $15,000 for novel jailbreak attacks and is giving HackerOne security researchers early access to its next safety mitigation system.
