No computer can ever be made entirely secure, short of encasing it in six feet of concrete. But with a carefully designed, multi-layered security approach, data can be protected well enough to give Fortune 500 companies the confidence to use it for generative AI, says Anand Kashyap, CEO and co-founder of Fortanix.
Chief Information Security Officers (CISOs) and their C-suite peers are plagued by several pressing concerns around GenAI. One is the prospect of sensitive data being sent to a publicly available large language model (LLM) such as Gemini or GPT-4, and the risk that the data could later leak back out of the model, with unforeseen consequences.
Retrieval-augmented generation (RAG) can significantly mitigate those risks, but the embeddings stored in vector databases must still be protected from unauthorized access. And access control remains a persistent challenge that can trip up even the most carefully designed security plans. A sketch of the access-control point appears below.
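To make that point concrete, here is a minimal sketch (illustrative only, not Fortanix's implementation; the documents, roles, and toy embeddings are invented for the example) of a vector store that filters retrieval results by the caller's role before any embedding leaves the store:

    import math

    # Toy vector store: each entry carries an embedding plus the set of roles
    # allowed to retrieve it. The 3-d vectors are hand-made; a real system
    # would generate embeddings with an embedding model.
    DOCS = [
        {"text": "Q3 revenue forecast",   "vec": [0.9, 0.1, 0.0], "roles": {"finance"}},
        {"text": "Public press release",  "vec": [0.2, 0.8, 0.1], "roles": {"finance", "support", "public"}},
        {"text": "Employee salary table", "vec": [0.7, 0.2, 0.3], "roles": {"hr"}},
    ]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    def retrieve(query_vec, caller_role, k=2):
        # Access control happens *before* similarity ranking, so embeddings
        # the caller may not see are never returned.
        visible = [d for d in DOCS if caller_role in d["roles"]]
        visible.sort(key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
        return [d["text"] for d in visible[:k]]

    print(retrieve([1.0, 0.0, 0.1], "support"))   # only public material
    print(retrieve([1.0, 0.0, 0.1], "finance"))   # finance docs become visible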
As GenAI grows in importance, navigating its security issues has become a pressing imperative for enterprises, Kashyap notes in a recent interview.
“Large enterprises understand the risks,” he says. “They’re extremely cautious about deploying GenAI for the applications they want, but at the same time, they don’t want to miss out. The fear of being left behind is rampant.”
Fortanix develops tools that safeguard sensitive information for major organizations worldwide, including Goldman Sachs, VMware, NEC, GE Healthcare, and the Department of Justice. At the core of the company’s offering is a confidential computing platform that uses encryption and tokenization to let clients process sensitive data within a secure environment protected by a hardware security module (HSM).
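Tokenization, one of the techniques mentioned above, swaps a sensitive value for a meaningless placeholder. A minimal sketch of the idea (not the Fortanix platform; in production the vault and keys would live inside the HSM-protected environment):

    import secrets

    # Minimal token vault: replaces a sensitive value with a random token and
    # keeps the mapping so authorized callers can recover the original.
    _vault = {}

    def tokenize(value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        _vault[token] = value
        return token

    def detokenize(token: str) -> str:
        return _vault[token]

    ssn = "123-45-6789"
    t = tokenize(ssn)
    print(t)               # e.g. tok_9f8a...; safe to pass to an LLM or a log
    print(detokenize(t))   # original value, recoverable only via the vault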
According to Kashyap, Fortune 500 companies can safely leverage GenAI by combining Fortanix’s confidential computing platform with other security tools, including role-based access control and a firewall with real-time monitoring.
“If you combine role-based access control with confidential computing to secure every component of the AI pipeline, including the LLM and the vector database, and layer real-time monitoring and policies on top, I firmly believe that approach will provide data protection far surpassing anything currently available on the market,” he says.
Another critical component of a GenAI security framework is a data discovery and cataloging tool that can identify sensitive data the moment it is created and keep pace as new data emerges.
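At its simplest, such a discovery tool scans data as it arrives and tags anything matching known sensitive patterns. A minimal, regex-based sketch (real products combine large pattern libraries with ML classifiers; the patterns here are illustrative):

    import re

    # Toy patterns for two common kinds of sensitive data.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def classify(record: str) -> set:
        """Return the set of sensitive-data labels found in a record."""
        return {label for label, rx in PATTERNS.items() if rx.search(record)}

    for rec in ["Contact: jane@example.com", "SSN on file: 123-45-6789", "Nothing sensitive"]:
        print(rec, "->", classify(rec) or "clean")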
Kashyap asserts that this combination of approaches, with the entire stack protected by confidential computing, would give even the largest Fortune 500 and government organizations the confidence to deploy GenAI.
There are always caveats in the pursuit of security, of course. Fortune 500 companies have been cautious in embracing GenAI, largely because of several high-profile incidents in which sensitive information found its way into public models and subsequently leaked. That history drives major corporations to err on the side of caution, greenlighting only the most critical chatbot and co-pilot use cases. But as AI capabilities continue to advance, these organizations will face mounting pressure to adopt GenAI to remain competitive.
The most sensitive workloads deliberately sidestep public LLMs because of the leakage risk, Kashyap notes. The RAG approach lets teams keep sensitive data close at hand while limiting what is sent out in prompts. Even so, some organizations remain reluctant to adopt RAG because of the need to securely safeguard their vector databases, he says. These organizations instead build and train their own LLMs, often starting from open-source models such as Meta’s Llama 3 or Google’s models.
“If you’re still worried about data leakage, run your own LLM,” he advises. “Enterprises that can’t use externally hosted LLMs because of sensitive data can opt for an in-house model that keeps them in control and gives them full visibility.”
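In practice, serving an open-weights model in-house can start from something as simple as the sketch below, which uses the Hugging Face transformers library (the model name is illustrative, Llama 3 weights are gated behind Meta's license, and a real deployment would pin versions and run inside the secured environment):

    # pip install transformers accelerate torch
    from transformers import pipeline

    # Load an open-weights model locally; prompts and outputs never leave
    # the host, which is the point of the in-house approach Kashyap describes.
    generator = pipeline(
        "text-generation",
        model="meta-llama/Meta-Llama-3-8B-Instruct",  # illustrative; gated weights
        device_map="auto",
    )

    prompt = "Summarize our internal data-retention policy in two sentences."
    result = generator(prompt, max_new_tokens=100)
    print(result[0]["generated_text"])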
Fortanix is expanding its GenAI security stack with the addition of an AI firewall. According to Kashyap, the planned offering, which does not yet have a delivery timeline, should appeal to companies that want to use a publicly accessible LLM while maximizing its security.
“An effective AI firewall needs a discovery engine that can thoroughly search for sensitive information, followed by a protection engine capable of redacting it, tokenizing it, or applying reversible encryption,” Kashyap notes. “Once you have that, and you know how to insert it into the network, you’re done.”
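Put together, those two engines might look something like the following minimal sketch (illustrative only, not Fortanix's forthcoming product), which scans an outbound prompt and replaces sensitive spans with reversibly encrypted placeholders before the prompt reaches a public LLM:

    # pip install cryptography
    import re
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in production this key would live in an HSM
    f = Fernet(key)

    SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy discovery rule: SSNs

    def outbound(prompt: str) -> str:
        """Discovery + protection: encrypt sensitive spans before they leave."""
        return SENSITIVE.sub(
            lambda m: "<enc:" + f.encrypt(m.group().encode()).decode() + ">",
            prompt,
        )

    def inbound(response: str) -> str:
        """Reverse the encryption for authorized consumers of the answer."""
        return re.sub(
            r"<enc:([^>]+)>",
            lambda m: f.decrypt(m.group(1).encode()).decode(),
            response,
        )

    safe = outbound("Employee 123-45-6789 requested a W-2 copy.")
    print(safe)            # SSN replaced by an encrypted placeholder
    print(inbound(safe))   # original text recovered on the way back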
Still, AI firewalls won’t be a perfect solution, he suggests; use cases involving the most sensitive data will likely require a team to build its own LLM and run it in-house. Firewall detection will never be fully accurate, and there will be both false positives and false negatives, he cautions. “It’s not going to address every possible situation.”
GenAI is reshaping the information security landscape on a massive scale, compelling businesses to reevaluate their strategies. New approaches such as confidential computing add security layers that could embolden enterprises to move forward confidently with GenAI. But even the most advanced security technology will be for naught unless a company takes basic measures to safeguard its data.
“The stark reality is that people aren’t even doing basic encryption of sensitive data,” Kashyap says. “An enormous amount of data will be compromised simply because it wasn’t encrypted.” Some companies are further along than others, but many lag far behind on fundamental cybersecurity practices, neglecting measures as basic as encryption. That, he says, is the place to start. From there, a company can improve its security posture.