This week at Microsoft Ignite, we presented the latest advancements in Microsoft’s AI and cloud technology. Over two action-packed days, Microsoft announced more than 100 updates spanning generative AI infrastructure breakthroughs, new tools, and advancements in adoption, productivity, and security. These updates will be a game changer for anyone building with AI and showcase Microsoft Azure’s commitment to being the leading cloud platform for AI.
We’re so excited about the innovation these updates will bring to the ecosystem, so we’ve distilled the top five Microsoft AI announcements that every startup should be aware of:
- Microsoft AI supercomputer: more power, better pricing through the NVIDIA partnership, and new purpose-built silicon.
- Simplification of OSS LLM interactions: streamlined access to models such as Llama 2, Mistral, and Jais through easy API calls.
- Updates to Azure OpenAI Service: multimodal capabilities, OpenAI GPT-4 fine-tuning, OpenAI GPT-4 Turbo with Vision, and cost-effective pricing.
- New Vector Search and semantic ranker: prioritizing precision in information retrieval.
- Enhanced Responsible AI tools: helping build applications that earn consumer and enterprise trust through advanced security measures and ethical considerations.
Azure AI Infrastructure – Azure is the world’s computer
Azure powers a range of solutions, from everyday cloud services to the most sophisticated AI models. We’re excited to announce new choices in price and performance across the Azure infrastructure technology stack, from the datacenter to the silicon that powers it, including:
- Custom-built silicon for AI and enterprise workloads in the Microsoft Cloud. This new custom silicon complements Microsoft’s offerings with industry partners. The two new chips, Microsoft Azure Maia and Microsoft Azure Cobalt, were built with a holistic view of hardware and software systems to optimize performance and price. Microsoft Azure Maia is an AI Accelerator chip designed to run cloud-based training and inferencing for AI workloads, such as OpenAI models, Bing, GitHub Copilot and ChatGPT. Microsoft Azure Cobalt is a cloud-native chip based on Arm architecture optimized for performance, power efficiency and cost-effectiveness for general purpose workloads.
- Azure Boost is now generally available. One of Microsoft Azure’s latest and most significant infrastructure improvements, Azure Boost enables greater network and storage performance at scale, improves security, and reduces servicing impact by moving virtualization processes traditionally performed by host servers, such as networking, storage, and host management, onto purpose-built hardware and software optimized for these processes. This innovation allows Microsoft to achieve the fastest remote and local storage performance in the market today: remote storage at 12.5 Gbps (gigabits per second) throughput and 650K IOPS (input/output operations per second), and local storage at 17.3 Gbps throughput and 3.8M IOPS.
- New VM and GPU offerings. New GPU offerings include the ND MI300 v5 virtual machines, featuring AMD accelerators optimized for generative AI workloads, and the new NC H100 v5 Virtual Machine (VM) series, in preview, built on the NVL variant of the NVIDIA Hopper 100 (H100), which offers greater memory per GPU. These new VM series will provide customers with greater performance, reliability, and efficiency for mid-range AI training and generative AI inferencing. With more memory per GPU in the VM, customers can increase data processing efficiency and enhance overall workload performance.
Introducing Model-as-a-Service
Azure Machine Learning continues to improve user experiences with new enhancements, including the general availability of prompt flow and the model catalog, plus a preview of integration with OneLake in Microsoft Fabric. These capabilities empower startups and machine learning professionals to streamline the development of AI-powered applications and operationalize responsible generative AI solutions across every stage of the generative AI development lifecycle. One key new feature is the Model-as-a-Service capability in the model catalog, which allows startups to easily integrate the latest AI models, such as Llama 2 from Meta and upcoming premium models from Mistral and Jais from G42, as API endpoints in their applications. Startups can also customize these models with their own data without needing to set up and manage GPU infrastructure, eliminating complexity and driving productivity, which is immensely important for early-stage startups working in small teams.
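To make that concrete, here is a minimal sketch of calling a Model-as-a-Service endpoint over REST once it has been deployed from the model catalog. The endpoint URL, API key, and request schema shown here are placeholders, not the definitive contract, so check your endpoint's details in Azure AI Studio for the exact values your deployment expects.

```python
# Minimal sketch: calling a Model-as-a-Service endpoint deployed from the
# Azure AI model catalog. The endpoint URL, API key, and payload schema below
# are placeholders -- consult your endpoint's consume/details page for exact values.
import os
import requests

ENDPOINT_URL = os.environ["MAAS_ENDPOINT_URL"]  # e.g. the chat completions URL shown for your deployment
API_KEY = os.environ["MAAS_API_KEY"]            # key issued when the endpoint is created

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant for a startup's support team."},
        {"role": "user", "content": "Summarize our refund policy in two sentences."},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the endpoint is serverless and billed pay-as-you-go, there is no GPU cluster to provision or scale; swapping models is largely a matter of pointing at a different endpoint.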
New multimodal capabilities arrive in Azure OpenAI Service
This has been one of the most requested areas from startups in Founders Hub over the past few weeks, and we are excited that it’s coming to Azure. The new OpenAI GPT-3.5 Turbo model with a 16K token prompt length will be generally available, and OpenAI GPT-4 Turbo will be in public preview in Azure OpenAI Service at the end of November 2023. OpenAI GPT-4 Turbo will enable customers to extend prompt length and bring even more control and efficiency to their generative AI applications. Furthermore, OpenAI GPT-4 Turbo with Vision is coming soon to preview, and DALL·E 3 is now available in public preview in Azure OpenAI Service, helping fuel the next generation of enterprise solutions alongside OpenAI GPT-4 so startups can build advanced functionality with images. And when used with our Azure AI Vision service, OpenAI GPT-4 Turbo with Vision can even understand video to generate text outputs.
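As an illustration, here is a minimal sketch of calling an Azure OpenAI deployment with the openai Python SDK (v1.x), including an image in the prompt for a GPT-4 Turbo with Vision deployment. The endpoint, API version, and deployment name are placeholders for your own resource and may differ from what your subscription supports.

```python
# Minimal sketch: chatting with an Azure OpenAI deployment via the openai Python SDK (v1.x).
# Endpoint, API version, and deployment name are placeholders for your own resource;
# the image-content message assumes a GPT-4 Turbo with Vision deployment.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-12-01-preview",                    # placeholder; use the version your resource supports
)

response = client.chat.completions.create(
    model="gpt-4-turbo-vision",  # your Azure *deployment* name, not the underlying model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the product shown in this screenshot."},
                {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```

For text-only models such as GPT-3.5 Turbo 16K or GPT-4 Turbo, the same call works with a plain string as the message content.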
General Availability of Vector Search and Semantic Ranker
Azure AI Search (formerly Azure Cognitive Search) is an information search and retrieval platform that enables startups to deliver highly personalized experiences in their generative AI applications. Vector Search, a feature of Azure AI Search, is now generally available, so startups can deliver highly accurate results for every user in their generative AI applications. Semantic ranker (formerly known as semantic search), which includes multilingual deep learning models adapted from Microsoft Bing, is also generally available and allows startups to ensure the most relevant search results are surfaced first.
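Here is a minimal sketch of a hybrid query that combines Vector Search with the semantic ranker using the azure-search-documents Python SDK. The index name, field names, semantic configuration, and embedding deployment are placeholders; the query embedding is generated with Azure OpenAI purely for illustration, and you would substitute whatever embedding model your index was built with.

```python
# Minimal sketch: hybrid (keyword + vector) query with semantic reranking in Azure AI Search.
# Index name, field names, and semantic configuration are placeholders for your own index.
import os
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery
from openai import AzureOpenAI

# Embed the query text (assumes an Azure OpenAI embedding deployment named "text-embedding-ada-002").
aoai = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-05-15",
)
query_text = "How do I rotate my API keys?"
query_vector = aoai.embeddings.create(model="text-embedding-ada-002", input=query_text).data[0].embedding

search_client = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],  # e.g. https://<service>.search.windows.net
    index_name="product-docs",                     # placeholder index
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
)

results = search_client.search(
    search_text=query_text,  # keyword half of the hybrid query
    vector_queries=[
        VectorizedQuery(vector=query_vector, k_nearest_neighbors=5, fields="contentVector")
    ],
    query_type="semantic",                  # enable the semantic ranker
    semantic_configuration_name="default",  # placeholder configuration name
    top=5,
)
for doc in results:
    print(doc["title"], doc.get("@search.reranker_score"))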
Enhancing our commitment to Responsible AI
Finally, every startup building generative AI solutions faces the new challenge of building its offerings in a secure and responsible manner. Microsoft leads the industry in the safe and responsible use of AI. The company has set the standard with an industry-leading commitment to defend and indemnify commercial customers from copyright-infringement lawsuits: the Copilot Copyright Commitment (CCC). Today, Microsoft takes that commitment one step further by expanding the CCC to customers using Azure OpenAI Service; the expanded benefit is called the Customer Copyright Commitment. Also, Azure AI Content Safety is now generally available, helping startups detect and mitigate harmful content and create better online experiences. Customers can use Azure AI Content Safety as a built-in safety system within Azure OpenAI Service, for open-source models as part of their prompt engineering in Azure Machine Learning, or as a standalone API service.
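For the standalone API path, here is a minimal sketch using the azure-ai-contentsafety Python SDK to screen user-generated text before it reaches a model. The endpoint, key, and any severity thresholds are placeholders for your own resource and moderation policy, and the response shape assumed here follows the SDK's text analysis result.

```python
# Minimal sketch: Azure AI Content Safety as a standalone service for screening text.
# Endpoint and key are placeholders; thresholds and blocking policy are up to you.
import os
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],  # e.g. https://<resource>.cognitiveservices.azure.com
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

result = client.analyze_text(AnalyzeTextOptions(text="User-submitted text to screen goes here."))

# Each harm category (e.g. hate, self-harm, sexual, violence) is returned with a severity
# score; route or block the content based on thresholds that fit your application.
for category in result.categories_analysis:
    print(category.category, category.severity)
```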
Startups get free access to Azure AI through Microsoft for Startups
For a deeper dive into these services and the other ~95 announcements, Microsoft Ignite will be available on-demand here. Founders can leverage up to $150k in credits towards Azure AI services through the Microsoft for Startups Founders Hub program to test and explore these capabilities hands-on. Additionally, our AI Advisory team is here to help startups accelerate their AI journey through one-to-one consultations.
We’re thrilled to introduce these innovations from Microsoft Ignite 2023 and look forward to the exciting possibilities they bring.
Not yet joined Microsoft for Startups Founders Hub? Sign up today.