In May 2024, we launched our inaugural Responsible AI Transparency Report. We're grateful for the feedback we received from our stakeholders around the world. Their insights have informed this second annual Responsible AI Transparency Report, which underscores our continued commitment to building AI technologies that people trust. Our report highlights new developments related to how we build and deploy AI systems responsibly, how we support our customers and the broader ecosystem, and how we learn and evolve.
The past year has seen a wave of AI adoption by organizations of all sizes, prompting a renewed focus on effective AI governance in practice. Our customers and partners are eager to learn how we've scaled our program at Microsoft and developed tools and practices that operationalize high-level norms.
Like us, they've found that building trustworthy AI is good for business, and that good governance unlocks AI opportunities. According to IDC's Microsoft Responsible AI Survey, which gathered insights on organizational attitudes and the state of responsible AI, over 30% of respondents cite the lack of governance and risk management solutions as the top barrier to adopting and scaling AI. Conversely, more than 75% of respondents who use responsible AI tools for risk management say those tools have helped with data privacy, customer experience, confident business decisions, brand reputation, and trust.
We've also seen new regulatory efforts and laws emerge over the past year. Because we've invested in operationalizing responsible AI practices at Microsoft for nearly a decade, we're well prepared to comply with these regulations and to empower our customers to do the same. Our work here isn't done, however. As we detail in the report, efficient and effective regulation and implementation practices that support the adoption of AI technology across borders are still being defined. We remain focused on contributing our practical insights to standard- and norm-setting efforts around the world.
Across all these facets of governance, it's important to remain nimble in our approach, applying learnings from our real-world deployments, updating our practices to reflect advances in the state of the art, and ensuring that we're responsive to feedback from our stakeholders. Learnings from our principled and iterative approach are reflected in the pages of this report. As our governance practices continue to evolve, we'll proactively share our latest insights with our stakeholders, both in future annual transparency reports and other public settings.
Key takeaways from our 2025 Transparency Report
In 2024, we made key investments in our responsible AI tools, policies, and practices to move at the speed of AI innovation.
- We improved our responsible AI tooling to provide expanded risk measurement and mitigation coverage for modalities beyond text—like images, audio, and video—and more support for agentic systems, semi-autonomous systems that we anticipate will represent a significant area of AI investment and innovation in 2025 and beyond.
- We took a proactive, layered approach to compliance with new regulatory requirements, including the European Union's AI Act, and provided our customers with resources and materials that empower them to innovate in line with relevant regulations. Our early investments in building a comprehensive and industry-leading responsible AI program positioned us well to shift our AI regulatory readiness efforts into high gear in 2024.
- We continued to apply a consistent risk management approach across releases through our pre-deployment review and red teaming efforts. This included oversight and review of high-impact and higher-risk uses of AI and generative AI releases, including every flagship model added to the Azure OpenAI Service and every Phi model release. To further support responsible AI documentation as part of these reviews, we launched an internal workflow tool designed to centralize the various responsible AI requirements outlined in the Responsible AI Standard.
- We continued to provide hands-on counseling for high-impact and higher-risk uses of AI through our Sensitive Uses and Emerging Technologies team. Generative AI applications, especially in fields like healthcare and the sciences, were notable growth areas in 2024. By gleaning insights across cases and engaging researchers, the team provided early guidance for novel risks and emerging AI capabilities, enabling innovation and incubating new internal policies and guidelines.
- We continued to lean on insights from research to inform our understanding of sociotechnical issues related to the latest developments in AI. We established the AI Frontiers Lab to invest in the core technologies that push the frontier of what AI systems can do in terms of capability, efficiency, and safety.
- We worked with stakeholders around the world to make progress toward building coherent governance approaches that help accelerate adoption and allow organizations of all kinds to innovate and use AI across borders. This included publishing a book exploring governance across various domains and helping advance cohesive standards for testing AI systems.
Looking ahead to the second half of 2025 and beyond
As AI innovation and adoption continue to advance, our core objective remains the same: earning the trust that we see as foundational to fostering broad and beneficial AI adoption around the world. As we continue that journey over the next year, we'll focus on three areas to advance our steadfast commitment to AI governance while ensuring that our efforts are responsive to an ever-evolving landscape:
- Developing more flexible and agile risk management tools and practices, while fostering skill development to anticipate and adapt to advances in AI. To ensure people and organizations around the world can leverage the transformative potential of AI, our ability to anticipate and manage the risks of AI must keep pace with AI innovation. This requires us to build tools and practices that can quickly adapt to advances in AI capabilities and to the growing variety of deployment scenarios, each of which has a unique risk profile. To do this, we will make greater investments in our systems of risk management to provide tools and practices for the most common risks across deployment scenarios, and also enable the sharing of test sets, mitigations, and other best practices across teams at Microsoft.
- Supporting effective governance across the AI supply chain. Building, earning, and keeping trust in AI is a collaborative endeavor that requires model developers, app builders, and system users to each contribute to trustworthy design, development, and operations. AI regulations, including the EU AI Act, reflect this need for information to flow across supply chain actors. While we embrace this concept of shared responsibility at Microsoft, we also recognize that pinning down how obligations fit together is complex, especially in a fast-changing AI ecosystem. To help advance shared understanding of how this can work in practice, we're deepening our work internally and externally to clarify roles and expectations.
- Advancing a vibrant ecosystem through shared norms and effective tools, particularly for AI risk measurement and evaluation. The science of AI risk measurement and evaluation is a growing but still nascent field. We're committed to supporting the maturation of this field by continuing to invest within Microsoft, including in research that pushes the frontiers of AI risk measurement and evaluation and the tooling to operationalize it at scale. We remain committed to sharing our latest developments in tooling and best practices with the broader ecosystem to support the advancement of shared norms and standards for AI risk measurement and evaluation.
We look forward to hearing your feedback on the progress we've made and on opportunities to collaborate on all that's still left to do. Together, we can advance AI governance efficiently and effectively, fostering trust in AI systems at a pace that matches the opportunities ahead.
Explore the 2025 Responsible AI Transparency Report.