Today, we’re announcing an important addition to the Databricks Ventures portfolio: Noma Security, an emerging leader in AI security and governance. This investment is coupled with a new partnership: Noma Security integrates with the Databricks Data Intelligence Platform to provide enhanced security, governance, risk management, and automated security policy enforcement for enterprise AI and AI agents. This collaboration underscores Databricks’ commitment to helping our joint enterprise customers build secure and trusted AI solutions.
Why Noma Security
As enterprises race to harness the transformative power of AI, concerns about security and governance often slow their progress. Databricks and Noma Security are partnering to address these challenges head-on, enabling organizations to deliver secure AI from initial design through production deployment, and across the entire AI lifecycle from model development to production use of agentic AI.
The Noma Security platform is quick to deploy and designed to help enterprises proactively prepare for emerging AI regulations such as the EU AI Act, and achieve critical certifications like ISO 42001, ensuring compliance in a rapidly evolving landscape.
Noma Security supports comprehensive AI security frameworks.
Better Together: Databricks and Noma Security
Noma Security integrates seamlessly with Databricks, enabling a comprehensive approach to securing the AI lifecycle. Our joint customers benefit in several ways:
AI Discovery and Governance: Noma Security enhances AI governance within the Databricks environment by providing detailed visibility into all AI assets. This includes maintaining a comprehensive inventory of AI models, generating a detailed AI Bill of Materials (AIBOM), and providing enhanced oversight for AI systems. Through a better understanding of AI components, organizations can manage risk, ensure compliance, and build confidence in their AI deployments.
Secure AI by Design: Integrating Noma Security with the Databricks Data Intelligence Platform enables the detection of AI risk before runtime, providing customers with proactive AI risk management by identifying potential vulnerabilities early in the development process. This integration streamlines AISecOps workflows, model scanning, AI red teaming, and misconfiguration detection to automate security checks and policy enforcement. With Noma Security and Databricks, security is an integral part of the AI development and deployment pipeline.
AI Runtime Protection: Noma Security delivers AI security risk coverage for Databricks AI to provide joint customers with a comprehensive shield for any AI initiative. AI models are protected against runtime threats, continuously monitored for suspicious behavior, and governed in accordance with regulations and compliance mandates. This end-to-end approach helps accelerate the responsible adoption of AI at scale.
Agentic AI Security: As AI evolves toward more complex agentic architectures, the need for robust AI security becomes even more critical. Noma Security covers the entire AI lifecycle, from scanning MCP servers to runtime security monitoring of agent reasoning and actions. Our partnership facilitates effective red teaming for AI agents, helping our joint customers proactively identify and address potential weaknesses in agentic AI systems.
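To make the AI Bill of Materials (AIBOM) idea above more concrete, here is a minimal sketch of what an asset inventory might look like. This is purely illustrative: the `AIBOMEntry` fields and `build_aibom` helper are hypothetical and do not represent Noma Security's or Databricks' actual schema or APIs.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AIBOMEntry:
    """One asset in a hypothetical AI Bill of Materials (illustrative only)."""
    name: str
    asset_type: str                       # e.g. "model", "dataset", "mcp_server"
    version: str
    owner: str
    dependencies: list = field(default_factory=list)

def build_aibom(entries):
    """Serialize an inventory of AI assets into a plain-dict AIBOM document."""
    return {"aibom_version": "0.1", "assets": [asdict(e) for e in entries]}

# Example inventory: a model and the dataset it depends on (hypothetical names).
aibom = build_aibom([
    AIBOMEntry("churn-model", "model", "2.3.0", "ml-team",
               dependencies=["customer-events-dataset"]),
    AIBOMEntry("customer-events-dataset", "dataset", "2024-05", "data-eng"),
])
print(len(aibom["assets"]))  # → 2
```

Keeping such an inventory as structured data is what makes the governance tasks described above (risk review, compliance reporting, dependency tracing) automatable rather than manual.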
Accelerating Confidence in AI
We’re excited to announce this investment and partnership with Noma Security. Together, we empower organizations to adopt AI more quickly and confidently by mitigating the security and governance risks that often cause friction.
If you are joining Databricks at the Data + AI Summit next week in San Francisco, request an in-person meeting with the Noma Security team to learn more about AI security and governance.
Can’t attend the Data + AI Summit? Request a demo of Noma Security, or register for this webinar on June 26 to learn how Databricks and Noma Security deliver security for the enterprise AI lifecycle.