One touted advantage of artificial intelligence is its potential to transform software development by automating mundane tasks. Despite initial optimism, recent research suggests that security leaders are hesitant to embrace AI in coding, with a striking 63% advocating a ban due to the risks involved.
Fully 92% of the security decision-makers surveyed express concern about the use of AI-generated code within their organization, and those concerns centre on the quality of its output.
AI models may have been trained on outdated open-source libraries, and developers can become overly reliant on tools that simplify their workflow, with the result that poor-quality code proliferates throughout a company's products.
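One way teams guard against the stale-dependency problem described above is a simple version-floor check in CI that flags any pinned dependency older than the lowest version without known advisories. The sketch below is illustrative only: the package names, versions, and "safe floor" data are hypothetical, not drawn from the Venafi report or any real advisory database.

```python
def parse_version(v):
    """Parse a dotted version string like '2.31.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def flag_outdated(pinned, minimum_safe):
    """Return dependencies pinned below the minimum version considered safe.

    pinned:       {name: version} as an AI assistant might suggest them
    minimum_safe: {name: lowest version without known advisories} (hypothetical)
    """
    outdated = {}
    for name, version in pinned.items():
        floor = minimum_safe.get(name)
        if floor is not None and parse_version(version) < parse_version(floor):
            outdated[name] = (version, floor)
    return outdated

# Hypothetical example: an assistant trained on an older corpus suggests a
# requests pin that predates the team's chosen safety floor.
suggested = {"requests": "2.19.0", "flask": "3.0.3"}
safe_floor = {"requests": "2.32.0", "flask": "2.3.0"}
print(flag_outdated(suggested, safe_floor))
# → {'requests': ('2.19.0', '2.32.0')}
```

In practice teams would feed such a check from a real advisory source rather than a hand-maintained dictionary, but the gate itself is this simple: block the merge when the AI-suggested pin falls below the floor.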
While security leaders acknowledge that AI-generated code has legitimate uses, they remain skeptical that it receives the same scrutiny as hand-written code, which is typically subject to far more rigorous testing and verification. Without that human oversight, developers feel less accountable for what AI models produce, reducing the pressure to ensure its quality.
TechRepublic recently sat down with Tariq Shaukat, CEO of code quality and security company Sonar, to discuss his concerns about the growing trend of companies using AI to write their code.
“Typically, such issues arise from insufficient code reviews, stemming either from the company’s failure to implement robust code quality standards and review processes, or from developers being too lenient when examining AI-generated code, applying a lower bar for critique than they would to their own handwritten code.”
When bugs surface in AI-generated code, the common refrain is “it’s not my code”: a disclaimer of accountability rooted in the fact that the developer didn’t write it.
The findings come from a report by Venafi, a provider of machine identity management solutions, which surveyed 800 security decision-makers across the United States, United Kingdom, Germany, and France. It found that 83% of organizations already use AI to develop code, and that more than half consider it common practice, despite the concerns raised by security professionals.
“Emerging threats, such as AI poisoning and rogue AI models, are gaining momentum as developers and novices alike unleash massive waves of generative AI code in unforeseen ways.”
While some have advocated banning AI-assisted coding outright, 72% of respondents believe they have no viable alternative but to allow its continued use to keep their company competitive. By 2028, 90% of enterprise software development professionals are expected to use AI-powered coding assistants, with significant productivity gains to match.
Sleepless nights for security professionals
According to Venafi’s report, nearly two-thirds of respondents say it is impossible to keep pace with hyper-productive developers while keeping their products secure. Furthermore, 66% say they cannot govern the safe use of AI across their organization because they lack visibility into where it is being used.
As a result, security leaders are deeply worried about the consequences of missed vulnerabilities, with 59% losing sleep over the issue. As concerns mount, nearly eight in ten predict that the spread of AI-generated code will culminate in a security reckoning, as a major incident forces a comprehensive overhaul of how such code is handled.
Boček pointed out that security teams are stuck between a rock and a hard place in a new world where AI writes code. Developers are already supercharged by AI and unwilling to give up their newfound powers. Meanwhile, the threat landscape is evolving rapidly, with malicious actors infiltrating open-source projects and nation-states such as North Korea orchestrating sophisticated cyberattacks.