Overview
This course is designed for learners at the beginner and intermediate levels, including data scientists, AI enthusiasts, and professionals seeking to harness Azure for Large Language Models (LLMs). Tailored for those with foundational programming experience and familiarity with Azure basics, this comprehensive program takes you through a four-week journey.

In the first week, you'll delve into Azure's AI services and the Azure portal, gaining insight into large language models, how they work, their benefits and risks, and strategies for mitigating those risks. Subsequent weeks cover practical applications, including leveraging Azure Machine Learning, managing GPU quotas, deploying models, and using the Azure OpenAI Service. As you progress, the course explores nuanced query crafting, Semantic Kernel implementation, and advanced strategies for optimizing interactions with LLMs in the Azure environment. The final week focuses on architectural patterns, deployment strategies, and hands-on application building using RAG, Azure services, and GitHub Actions workflows.

Whether you're a data professional or an AI enthusiast, this course equips you with the skills to deploy, optimize, and build robust large-scale applications leveraging Azure and Large Language Models.
Syllabus
- Introduction to LLMOps with Azure
- In this module, you will learn how to get started with Azure and its AI services through an introduction to the Azure portal, and key offerings like Azure Machine Learning. You will also gain an understanding of large language models, including how they work, their benefits and risks, and strategies for mitigating those risks. Finally, you will be introduced to options for discovering, evaluating, and deploying pre-trained LLMs in Azure, including leveraging prompt engineering for responsible data grounding.
- LLMs with Azure
- In this module, you will learn to leverage Azure for Large Language Models (LLMs): working with Azure Machine Learning compute resources, managing GPU quotas and model deployments, and using the Azure OpenAI Service. You will apply this knowledge by deploying a model and calling its inference API from Python.
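As a preview of the kind of inference call covered in this module, the sketch below assembles a request to the Azure OpenAI chat completions REST endpoint using only the Python standard library. The resource endpoint, deployment name, and API key shown are placeholders; you would substitute the values from your own Azure deployment.

```python
import json
import urllib.request

def build_chat_request(endpoint, deployment, api_key, messages,
                       api_version="2024-02-01"):
    """Assemble (but do not send) a REST call to an Azure OpenAI
    chat deployment. Endpoint and deployment name come from your
    own Azure resource after you deploy a model."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    headers = {"Content-Type": "application/json", "api-key": api_key}
    body = json.dumps({"messages": messages}).encode("utf-8")
    return urllib.request.Request(url, data=body, headers=headers)

# Hypothetical values for illustration only:
req = build_chat_request(
    "https://my-resource.openai.azure.com",  # placeholder endpoint
    "gpt-35-turbo",                          # placeholder deployment name
    "YOUR-API-KEY",
    [{"role": "user", "content": "Summarize LLMOps in one sentence."}],
)
# With real credentials you would send it with:
# response = urllib.request.urlopen(req)
```

Building the request separately from sending it makes the URL, headers, and payload easy to inspect before any billable call is made.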
- Extending with Functions and Plugins
- In this module, you will discover the art of crafting nuanced queries for Large Language Models (LLMs) in Azure through the implementation of Semantic Kernel. You will gain insights into refining prompts, understand the dynamics of using system prompts, and explore advanced strategies to optimize your interaction with LLMs. You will apply these techniques hands-on to enhance your proficiency in leveraging Semantic Kernel within the Azure environment.
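To illustrate the system-prompt and prompt-refinement ideas this module covers, here is a simplified re-implementation of the `{{$variable}}` placeholder style used by Semantic Kernel prompt templates. It is a minimal sketch of the concept, not the Semantic Kernel API itself.

```python
import re

def render_prompt(template, variables):
    """Fill {{$name}} placeholders in a prompt template
    (a simplified take on Semantic Kernel's template syntax)."""
    def substitute(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing prompt variable: {name}")
        return str(variables[name])
    return re.sub(r"\{\{\$(\w+)\}\}", substitute, template)

def build_messages(system_prompt, user_template, variables):
    """Pair a fixed system prompt (which sets behavior) with a
    rendered, variable-driven user prompt."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": render_prompt(user_template, variables)},
    ]

messages = build_messages(
    "You are a concise Azure assistant. Answer in one sentence.",
    "Explain {{$topic}} to a {{$audience}}.",
    {"topic": "GPU quotas", "audience": "beginner"},
)
```

Keeping the system prompt fixed while varying only the templated user prompt is what makes interactions with an LLM repeatable and easy to refine.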
- Building an End-to-End LLM application in Azure
- In this module, you will explore architectural patterns and deployment of large language model applications. By studying RAG, Azure services, and GitHub Actions, you will learn how to build robust applications. You will apply your learning by implementing RAG with Azure Search, creating GitHub Actions workflows, and deploying an end-to-end application.
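The core RAG pattern in this module can be sketched in a few lines: retrieve relevant documents, then ground the model's prompt in that context. The toy keyword retriever below is a stand-in for a real Azure Search index, used only to make the pipeline runnable without cloud resources.

```python
def retrieve(query, documents, top_k=2):
    """Toy retriever standing in for an Azure Search index:
    rank documents by how many query terms they contain."""
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Ground the question in retrieved context before it is
    sent to an LLM (the 'augmented generation' step)."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

docs = [
    "Azure OpenAI Service hosts deployed chat models.",
    "GitHub Actions automates build and deploy workflows.",
    "RAG grounds model answers in retrieved documents.",
]
prompt = build_rag_prompt("How does RAG ground answers?", docs)
```

In the actual course project, the retrieval step would query an Azure Search index and the final prompt would be sent to a deployed model, but the shape of the pipeline stays the same.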
Taught by
Noah Gift, Alfredo Deza and Derek Wales