To fully unlock the potential of generative AI, customization is crucial. On our blog, we showcase the latest advancements in Microsoft Azure AI.
Artificial intelligence has transformed how we tackle complex problems and innovate across industries. Generative models can produce coherent images and language that is often indistinguishable from human work, demonstrating remarkable potential. To unlock the full potential of these models, customization is essential. We are excited to announce a series of new customization updates:
- Fine-tuning for Azure OpenAI Service’s GPT-4o and GPT-4o mini models is now generally available.
- New models are available in the catalog, including Phi-3.5-MoE and Phi-3.5-vision via serverless endpoints, Meta’s Llama 3.2, SDAIA’s ALLaM-2-7B, and updated Command R and Command R+ models from Cohere.
- New enterprise capabilities for data privacy and security are coming as we continue to expand our enterprise offerings.
- Responsible AI features are now available, including a correction capability in groundedness detection that enables thorough evaluation of output quality and safety.
- Network isolation and private endpoint support enable building and customizing generative AI applications within Azure AI Studio.
Unlock the power of customized LLMs with Azure AI
As customization of large language models (LLMs) has become increasingly popular, our customers can combine state-of-the-art generative AI models with their proprietary data and domain expertise to unlock new value. Fine-tuning has advanced considerably, making it a sought-after approach for creating tailored LLMs: faster, more cost-effective, and more reliable than training models from scratch.
Azure AI provides tooling for customers to fine-tune and customize models across the Azure OpenAI Service, the Phi family of models, and the broader catalog of more than 1,600 models. We are thrilled to announce the latest milestone: fine-tuning is now generally available for GPT-4o and GPT-4o mini. Following a successful preview, all customers can now fine-tune these models. We have also enabled customization for a wider range of model families.
Regardless of whether you’re optimizing for specific industries, ensuring model voice consistency, or improving response accuracy across various languages, GPT-4o and GPT-4o mini offer robust solutions to meet your needs.
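Fine-tuning starts with a training file in JSONL chat format, where each record is a conversation ending with the desired assistant response. The sketch below prepares such a file; the example conversations and the base-model name in the comment are illustrative placeholders, not values from this announcement.

```python
import json

# Hypothetical fine-tuning records: each line of the training file is a
# chat-style conversation ending with the desired assistant response.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a translation assistant."},
        {"role": "user", "content": "Translate to German: Good morning."},
        {"role": "assistant", "content": "Guten Morgen."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a translation assistant."},
        {"role": "user", "content": "Translate to German: Thank you."},
        {"role": "assistant", "content": "Danke."},
    ]},
]

def to_jsonl(records):
    """Serialize records into JSONL, one JSON object per line."""
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in records)

with open("training.jsonl", "w", encoding="utf-8") as f:
    f.write(to_jsonl(examples))

# Once the file is uploaded to the service, a fine-tuning job could be
# created with the openai SDK, e.g. (placeholder IDs, not runnable here):
#   client.fine_tuning.jobs.create(
#       model="gpt-4o-mini-2024-07-18", training_file=file_id)
```

In practice, a few hundred high-quality examples in this format tend to matter more than sheer volume; the service validates each line against the chat schema before training begins.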
Lionbridge, a pioneer in translation automation, has been at the forefront of embracing innovation by leveraging Azure OpenAI Service, achieving enhanced translation accuracy through fine-tuning.
“For several years, our team at Lionbridge has been tracking the comparative effectiveness of available translation automation methods. As an early adopter of GPT models at scale, we have successfully fine-tuned multiple generations of GPT models with impressive results. We are delighted to add GPT-4o and GPT-4o mini on the Azure OpenAI Service to our portfolio of premium models. Our findings indicate that fine-tuned GPT models consistently outperform both baseline GPT and Neural Machine Translation engines in translation accuracy across languages such as Spanish, German, and Japanese. With these new models, we are poised to further enhance our AI-powered translation services and adapt them to each client’s unique linguistic nuances and stylistic preferences.”
Nuance, a Microsoft company, has been a pioneer in AI-driven healthcare solutions since 1996, beginning with breakthroughs in speech-to-text automation designed specifically for the healthcare sector. Today, Nuance leverages generative AI to improve patient care with greater precision and efficiency. Anuj Shroff, Nuance’s general manager of clinical solutions, emphasized the impact of generative AI customization:
“Nuance has long recognized the value of fine-tuning AI models to deliver highly specialized and accurate solutions for our healthcare clients, driving meaningful improvements in patient care. With GPT-4o and GPT-4o mini now available on Azure OpenAI Service, we’re thrilled to further enhance our AI-powered offerings. The ability to customize GPT-4o for specific workflows is a significant milestone for AI-powered healthcare,” said Anuj Shroff, Nuance’s general manager of clinical solutions.
For customers seeking cost-effective solutions with low computational overhead and deployment across diverse edge devices, fine-tuning the Phi-3 family of small language models has yielded impressive results. Khan Academy recently showcased a fine-tuned Phi-3 model that surpasses other models at detecting and correcting student math errors.
A platform for fine-tuning
Fine-tuning encompasses far more than model training alone. From data preparation to model evaluation, Azure offers a comprehensive platform that combines data and AI capabilities with scalability, security, and reliability. To help developers get started, we recently published a tutorial on using Azure AI to build custom, domain-adapted models that can be integrated into applications and solutions.
We’re hosting an online event to cover the essentials and practical recipes for getting started with fine-tuning. Join us to learn more.
Expanding model choice
The Azure AI model catalog offers more than 1,600 models, one of the most comprehensive selections for building generative AI applications. With it, developers can quickly prototype and identify the best-suited model for their specific use case.
We are pleased to announce the following new additions to the model catalog:
- Phi-3.5-MoE: a Mixture-of-Experts (MoE) model, available via serverless endpoints and integrated with GitHub Models. With 16 experts and 6.6 billion active parameters, it offers multilingual capabilities, competitive performance, and robust safety features. Phi-3.5-vision, with 4.2 billion parameters and available through managed compute, can reason over multiple input images, unlocking new scenarios such as detecting differences between pictures.
- Meta Llama 3.2: Llama’s first multimodal models, now available through managed compute in the Azure AI model catalog, with support for serverless inferencing coming soon.
- ALLaM-2-7B from SDAIA: a state-of-the-art model engineered for natural language understanding in both Arabic and English. With roughly seven billion parameters, ALLaM-2-7B aims to serve industries that demand strong language processing capabilities.
- Command R and Command R+ from Cohere: recognized for robust retrieval-augmented generation (RAG) with citations, multilingual support in more than 10 languages, and workflow automation. The latest updates deliver improved efficiency, affordability, and user experience, with notable gains in coding, math, reasoning, and latency; Command R is the fastest and most efficient model in the family.
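Models deployed via serverless endpoints are typically consumed through an OpenAI-compatible chat-completions route. The sketch below only assembles the request (URL, headers, JSON body) without sending it; the endpoint URL, key, and exact auth scheme are placeholders, since the real values come from your deployment’s details page.

```python
import json

def build_chat_request(endpoint_url, api_key, user_prompt, temperature=0.2):
    """Assemble URL, headers, and JSON body for a chat-completions call.

    The route and bearer-token auth follow the OpenAI-compatible pattern
    commonly exposed by serverless endpoints; confirm the exact URL and
    auth scheme against your deployment before use.
    """
    url = endpoint_url.rstrip("/") + "/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer " + api_key,
    }
    body = {
        "messages": [{"role": "user", "content": user_prompt}],
        "temperature": temperature,
        "max_tokens": 256,
    }
    return url, headers, json.dumps(body)

# Placeholder endpoint and key for illustration only.
url, headers, payload = build_chat_request(
    "https://example-endpoint.example.inference.ai.azure.com",
    "<API_KEY>",
    "Summarize retrieval-augmented generation in one sentence.",
)
```

Because the wire format is the same across these catalog models, swapping, say, a Cohere Command R endpoint for a Phi-3.5-MoE endpoint is mostly a matter of changing the URL and key.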
Achieve AI transformation with confidence
Recently, we announced a comprehensive set of commitments and capabilities aimed at making AI safe, secure, and private. Data privacy and security, two core pillars of Trustworthy AI, are foundational to how we design and deliver our services. To help organizations meet regulatory and compliance requirements, the Azure OpenAI Service offers robust enterprise controls so teams can build with confidence. We are excited to announce that additional data privacy and security capabilities are coming soon. With the introduction of Azure OpenAI Data Zones, customers will gain the flexibility to deploy their models with Global, Data Zone, or regional processing options, keeping data at rest within the chosen geography of their Azure resource. We look forward to sharing more details with customers soon.
Additionally, we recently introduced private access to storage via managed virtual networks (VNet), supporting Azure AI Search, Azure AI services, and the Azure OpenAI Service. Developers can chat securely with their organization’s data using private endpoints, and network isolation ensures that external entities cannot reach those resources. Customers can also enable Microsoft Entra ID for secure, keyless access to Azure AI Search, Azure AI services, and Azure OpenAI Service connections within Azure AI Studio.
These security features are essential for enterprise customers, particularly those in regulated industries that rely on sensitive data for model fine-tuning or for retrieval-augmented generation (RAG) workflows.
Alongside privacy and security, safety is at the forefront of our considerations. To reinforce our commitment to responsible AI, we introduced Azure AI Content Safety in 2023, enabling safer generative AI applications through built-in guardrails. Building on this work, features including prompt shields and protected material detection are enabled by default, at no additional cost, within Azure OpenAI Service. These capabilities can also be used as content filters with models from our catalog, including Phi-3, Llama, and Cohere models. Additionally, we’ve expanded Azure AI Content Safety with:
- Correction in groundedness detection: proactively helps fix hallucinations in real time, before users see them. Now available in preview.
- Protected material detection for code: identifies pre-existing code, helping developers discover matching publicly available source code in GitHub repositories, promoting transparency and more informed coding decisions.
Finally, we introduced evaluations to help customers assess the quality and safety of AI-generated content, as well as how often their AI application outputs protected material.
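Groundedness detection checks whether model output is supported by supplied source documents. The sketch below assembles such a request for the Content Safety REST API without sending it; the route, API version, and payload shape are assumptions based on the service’s REST pattern, and the endpoint and texts are placeholders, so consult the service documentation before relying on them.

```python
import json

def build_groundedness_request(resource_endpoint, generated_text, sources):
    """Build the URL and JSON body for a groundedness-detection call.

    The route and payload shape below are an assumption based on the
    Content Safety REST pattern; check the service docs for the current
    API version and schema before use.
    """
    url = (resource_endpoint.rstrip("/")
           + "/contentsafety/text:detectGroundedness"
           + "?api-version=2024-09-15-preview")
    body = {
        "domain": "Generic",
        "task": "Summarization",
        "text": generated_text,        # model output to verify
        "groundingSources": sources,   # documents it should be grounded in
    }
    return url, json.dumps(body)

# Placeholder resource endpoint and texts for illustration only.
url, payload = build_groundedness_request(
    "https://my-contentsafety.cognitiveservices.azure.com",
    "The report says revenue grew 12% in Q3.",
    ["Q3 revenue increased by 12% year over year."],
)
```

The response indicates whether the text is ungrounded and, with the correction capability enabled, can return a revised version of the text before it reaches users.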
Get started with Azure AI
As a product builder, it’s exhilarating and humbling to work with customers on bringing these AI innovations, models, customization, and safety features, into production, and to witness the tangible transformations they drive. Whether with a large language model (LLM) or a small language model (SLM), customizing a generative AI model unlocks its full potential, allowing organizations to tackle specific challenges and drive innovation in their domains. Get started and build your future with Azure AI.