Azure AI Unveils Fine-Tuning and New Model Support


To truly harness the power of generative AI, customization is key. In this blog post, we will share the latest updates from Microsoft Azure AI.

AI has revolutionized problem-solving and creativity across various industries. From generating realistic images to crafting human-like text, AI models have shown immense potential. However, to fully leverage their capabilities, customization is crucial. We are excited to announce new customization updates on Microsoft Azure AI, including:

  • General availability of fine-tuning for Azure OpenAI Service GPT-4o and GPT-4o mini.
  • Availability of new models, including Phi-3.5-MoE and Phi-3.5-vision through serverless endpoints, Meta's Llama 3.2, the Saudi Data and AI Authority's (SDAIA) ALLaM-2-7B, and updated Command R and Command R+ from Cohere.
  • New capabilities that enhance our enterprise offerings, including the upcoming availability of Azure OpenAI Data Zones.
  • New responsible AI features, including Correction, a feature in Azure AI Content Safety's groundedness detection; new evaluations to assess the quality and security of outputs; and Protected Material Detection for Code.
  • Full Network Isolation and Private Endpoint Support for building and customizing generative AI applications in Azure AI Studio.


    Unlock the Power of Custom LLMs with Azure AI


    Customization of large language models (LLMs) has become a popular way for users to leverage the power of top-tier generative AI models combined with the unique value of proprietary data and domain expertise. Fine-tuning has emerged as the preferred method to create custom LLMs: it is faster, more cost-effective, and more reliable than training models from scratch.

    Azure AI is proud to offer tools that enable customers to fine-tune models across Azure OpenAI Service, the Phi family of models, and over 1,600 models in the model catalog. We are thrilled to announce the general availability of fine-tuning for both GPT-4o and GPT-4o mini on Azure OpenAI Service. After a successful preview, these models are now fully available for customer fine-tuning. We've also enabled fine-tuning for SLMs with the Phi-3 family of models.

    Azure OpenAI Service fine-tuning GPT-4o

    Whether you're optimizing for specific industries, enhancing brand voice consistency, or improving response accuracy across different languages, GPT-4o and GPT-4o mini provide robust solutions to meet your needs.
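    As a rough sketch of what fine-tuning looks like in practice, the snippet below prepares a training file in the chat-format JSONL that Azure OpenAI fine-tuning expects and defines a helper that submits a job with the `openai` Python SDK. The example records, file name, and model identifier are illustrative assumptions, not part of the announcement; your endpoint and API version come from your own Azure OpenAI resource.

```python
import json

# Hypothetical brand-voice examples in the chat format that Azure OpenAI
# fine-tuning expects: one JSON object per line, each with a "messages" list.
examples = [
    {"messages": [
        {"role": "system", "content": "You are Contoso's support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Security, then choose Reset password."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are Contoso's support assistant."},
        {"role": "user", "content": "Where can I download my invoice?"},
        {"role": "assistant", "content": "Invoices are under Billing > Documents in your account."},
    ]},
]

# Write the dataset as JSONL, the upload format for fine-tuning.
with open("training.jsonl", "w", encoding="utf-8") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")

def start_finetune_job(client, model="gpt-4o-mini"):
    """Upload the training file and start a fine-tuning job.

    `client` is an openai.AzureOpenAI client; the endpoint, API version,
    and exact base-model name depend on your Azure OpenAI resource.
    """
    with open("training.jsonl", "rb") as data:
        uploaded = client.files.create(file=data, purpose="fine-tune")
    job = client.fine_tuning.jobs.create(
        training_file=uploaded.id, model=model
    )
    return job.id
```

    Once the job completes, the resulting custom model is deployed and called like any other Azure OpenAI deployment.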

    Lionbridge, a leader in translation automation, has been an early adopter of Azure OpenAI Service and has used fine-tuning to enhance translation accuracy.

    "At Lionbridge, we have been tracking the relative performance of available translation automation systems for many years. As a very early adopter of GPTs on a large scale, we have fine-tuned several generations of GPT models with very satisfactory results. We're thrilled to now extend our portfolio of fine-tuned models to the newly available GPT-4o and GPT-4o mini on Azure OpenAI Service. Our data shows that fine-tuned GPT models outperform both baseline GPT and Neural Machine Translation engines in translation accuracy in languages like Spanish, German, and Japanese. With the general availability of these advanced models, we're looking forward to further enhancing our AI-driven translation services, delivering even greater alignment with our customers' specific terminology and style preferences," said Marcus Casal, Chief Technology Officer, Lionbridge.

    Nuance, a Microsoft company, has been a pioneer in AI-enabled healthcare solutions since 1996, starting with the first clinical speech-to-text automation for healthcare. Today, Nuance continues to leverage generative AI to transform patient care. Anuj Shroff, General Manager of Clinical Solutions at Nuance, highlighted the impact of generative AI and customization:

    "Nuance has long recognized the potential of fine-tuning AI models to deliver highly specialized and accurate solutions for our healthcare clients. With the general availability of GPT-4o and GPT-4o mini on Azure OpenAI Service, we're excited to further enhance our AI-driven services. The ability to tailor GPT-4o's capabilities to specific workflows marks a significant advancement in AI-driven healthcare solutions," said Shroff.

    For customers focused on low costs, small compute footprints, and edge compatibility, Phi-3 SLM fine-tuning is proving to be a valuable approach. Khan Academy recently published a research paper showing their fine-tuned version of Phi-3 performed better at finding and fixing student math mistakes compared to other models.

    A Platform for Customization Quality


    Fine-tuning is about much more than just training models. From data generation to model evaluation to scaling custom models to production workloads, Azure provides a unified platform: data generation via powerful LLMs, AI Studio evaluation, built-in safety guardrails for fine-tuned models, and more. Alongside the general availability of GPT-4o and GPT-4o mini, we recently shared an end-to-end distillation flow for retrieval-augmented fine-tuning, showing how to leverage Azure AI for custom, domain-adapted models.

    We are hosting a webinar on October 17, 2024, to discuss the essentials and practical recipes to get started with fine-tuning. We hope you will join us to learn more.

    Expanding Model Choice


    With over 1,600 models, the Azure AI model catalog offers the broadest selection of models for building generative AI applications. Azure AI models are now also available through GitHub Models, so developers can quickly prototype and evaluate the best model for their use case.

    We are excited to share new model availability, including:

  • Phi-3.5-MoE-instruct, a Mixture-of-Experts (MoE) model, and Phi-3.5-vision-instruct through serverless endpoints, as well as through GitHub Models. Phi-3.5-MoE-instruct, with 16 experts and 6.6B active parameters, provides multilingual capability, competitive performance, and robust safety measures. Phi-3.5-vision-instruct (4.2B parameters), now available through managed compute, enables reasoning across multiple input images, opening up new possibilities such as detecting differences between images.
  • Meta's Llama 3.2 11B Vision Instruct and Llama 3.2 90B Vision Instruct. These models are Llama's first-ever multimodal models and are available via managed compute in the Azure AI model catalog. Inferencing through serverless endpoints is coming soon.
  • SDAIA's ALLaM-2-7B. This new model is designed to facilitate natural language understanding in both Arabic and English. With 7 billion parameters, ALLaM-2-7B aims to serve as a critical tool for industries requiring advanced language processing capabilities.
  • Updated Command R and Command R+ from Cohere available in Azure AI Studio and through GitHub Models. Known for their expertise in retrieval-augmented generation (RAG) with citations, multilingual support in over 10 languages, and workflow automation, the latest versions offer better efficiency, affordability, and user experience. They feature improvements in coding, math, reasoning, and latency, with Command R being the fastest and most efficient model yet.
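    To give a feel for how models deployed to serverless endpoints are consumed, here is a minimal sketch using the `azure-ai-inference` Python package. The endpoint URL, key, and prompts are placeholders you would take from your own deployment in Azure AI Studio; the SDK imports sit inside the function so the pure message-building helper stands on its own.

```python
def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Shape a conversation in the role/content format chat endpoints expect."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

def chat_serverless(endpoint_url: str, api_key: str, user_prompt: str) -> str:
    """Send one chat turn to a model deployed as a serverless endpoint.

    Assumes the azure-ai-inference package is installed; endpoint_url and
    api_key come from the deployment's details in Azure AI Studio.
    """
    from azure.ai.inference import ChatCompletionsClient
    from azure.core.credentials import AzureKeyCredential

    client = ChatCompletionsClient(
        endpoint=endpoint_url, credential=AzureKeyCredential(api_key)
    )
    response = client.complete(
        messages=build_messages("You are a helpful assistant.", user_prompt)
    )
    return response.choices[0].message.content
```

    Because the inference client speaks a common API across catalog models, swapping Phi-3.5-MoE for Command R+ is largely a matter of pointing at a different endpoint.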

    Achieve AI Transformation with Confidence


    Earlier this week, we unveiled Trustworthy AI, a set of commitments and capabilities to help build AI that is secure, safe, and private. Data privacy and security, core pillars of Trustworthy AI, are foundational to designing and implementing new solutions. To help meet regulatory and compliance standards, Azure OpenAI Service provides robust enterprise controls so organizations can build with confidence. We continue to expand these controls and recently announced the upcoming availability of Azure OpenAI Data Zones to further enhance data privacy and security. Building on Azure OpenAI Service's existing data processing and storage options, Data Zones give customers a choice between Global, Data Zone, and regional deployments, allowing them to keep data at rest within the region chosen for their resource. We are excited to bring this to customers soon.

    Additionally, we recently announced full network isolation in Azure AI Studio, with private endpoints to storage, Azure AI Search, Azure AI services, and Azure OpenAI Service supported via managed virtual network (VNET). Developers can also chat with their enterprise data securely using private endpoints in the chat playground. Network isolation prevents entities outside the private network from accessing its resources. For additional control, customers can now enable Entra ID for credential-less access to Azure AI Search, Azure AI services, and Azure OpenAI Service connections in Azure AI Studio. These security capabilities are critical for enterprise customers, particularly those in regulated industries using sensitive data for model fine-tuning or retrieval augmented generation (RAG) workflows.
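    As an illustrative sketch of the credential-less pattern, the helper below authenticates an Azure OpenAI client with an Entra ID token via `azure-identity` instead of an API key. The scope string is the standard Cognitive Services scope; the endpoint, API version, and required role assignment are deployment-specific assumptions.

```python
# Standard Entra ID scope for Azure AI services; tokens issued for this
# scope authorize calls without an API key.
COGNITIVE_SERVICES_SCOPE = "https://cognitiveservices.azure.com/.default"

def make_keyless_client(azure_endpoint: str, api_version: str = "2024-06-01"):
    """Build an Azure OpenAI client authenticated with Entra ID.

    Assumes azure-identity and openai are installed, and that the signed-in
    identity holds a suitable role (e.g., Cognitive Services OpenAI User)
    on the target resource.
    """
    from azure.identity import DefaultAzureCredential, get_bearer_token_provider
    from openai import AzureOpenAI

    # DefaultAzureCredential tries managed identity, CLI login, etc. in turn,
    # so the same code works locally and inside a locked-down VNET.
    token_provider = get_bearer_token_provider(
        DefaultAzureCredential(), COGNITIVE_SERVICES_SCOPE
    )
    return AzureOpenAI(
        azure_endpoint=azure_endpoint,
        azure_ad_token_provider=token_provider,
        api_version=api_version,
    )
```

    With no key to store, there is nothing to leak or rotate, which is the point of the credential-less connections described above.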

    In addition to privacy and security, safety is top of mind. As part of our responsible AI commitment, we launched Azure AI Content Safety in 2023 to enable generative AI guardrails. Building on this work, Azure AI Content Safety features, including prompt shields and protected material detection, are on by default and available at no cost in Azure OpenAI Service. Further, these capabilities can be leveraged as content filters with any foundation model in our model catalog, including Phi-3, Llama, and Cohere models. We also announced new capabilities in Azure AI Content Safety, including:

  • Correction to help fix hallucination issues in real time before users see them, now available in preview.
  • Protected Material Detection for Code to help detect pre-existing content and code. This feature helps developers explore public source code in GitHub repositories, fostering collaboration and transparency, while enabling more informed coding decisions.
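    As a rough sketch of how groundedness detection with Correction might be invoked, the snippet below builds a request body for the Azure AI Content Safety REST API and a thin caller around it. The operation path, API version, and field names shown here are assumptions based on the preview API shape and may differ; check the current Content Safety reference before relying on them.

```python
import json
import urllib.request

def build_groundedness_request(text: str, sources: list[str], correct: bool = True) -> dict:
    """Build a groundedness-detection request body.

    Field names ("domain", "task", "text", "groundingSources", "correction")
    follow the preview REST API shape and are assumptions here.
    """
    return {
        "domain": "Generic",
        "task": "Summarization",
        "text": text,                 # the model output to check
        "groundingSources": sources,  # documents the output should be grounded in
        "correction": correct,        # ask the service to rewrite ungrounded spans
    }

def detect_groundedness(endpoint: str, api_key: str, body: dict) -> dict:
    """POST the request to the (assumed) detectGroundedness operation."""
    url = f"{endpoint}/contentsafety/text:detectGroundedness?api-version=2024-09-15-preview"
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

    The response indicates which spans are ungrounded and, when correction is requested, a rewritten version that can be shown to users in place of the hallucinated text.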

    Lastly, we announced new evaluations to help customers assess the quality and security of outputs and how often their AI application outputs protected material.

    Get Started with Azure AI


    As a product builder, it is exciting and humbling to bring new AI innovations to customers, including models, customization, and safety features, and to see the real transformation that customers are driving. Whether an LLM or SLM, customizing a generative AI model helps to boost its potential, allowing businesses to address specific challenges and innovate in their respective fields. Create the future today with Azure AI.

    Additional Resources


    For more information and updates, visit the Azure AI blog.

Neil S