Overview
- Module 1: Learn how to deploy models to a managed online endpoint for real-time inferencing.
In this module, you'll learn how to:
- Use managed online endpoints.
- Deploy your MLflow model to a managed online endpoint.
- Deploy a custom model to a managed online endpoint.
- Test online endpoints.
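A minimal sketch of this flow using the Azure Machine Learning Python SDK v2 (azure-ai-ml), assuming an existing workspace and a local MLflow model folder; the endpoint name, deployment name, model path, VM size, and the angle-bracket placeholders are illustrative assumptions, not values from the course:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment, Model
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

# Connect to the workspace (subscription, resource group, and workspace are placeholders).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Create the managed online endpoint.
endpoint = ManagedOnlineEndpoint(name="credit-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy an MLflow model; MLflow models don't need a scoring script or environment.
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="credit-endpoint",
    model=Model(path="./model", type=AssetTypes.MLFLOW_MODEL),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()

# Route all traffic to the new deployment.
endpoint.traffic = {"blue": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```

Deploying a custom (non-MLflow) model follows the same pattern but also supplies a scoring script and an environment on the deployment.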
- Module 2: Learn how to deploy models to a batch endpoint for batch inferencing.
In this module, you'll learn how to:
- Create a batch endpoint.
- Deploy your MLflow model to a batch endpoint.
- Deploy a custom model to a batch endpoint.
- Invoke batch endpoints.
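A corresponding sketch for batch endpoints with the SDK v2, reusing the `ml_client` from the previous sketch and assuming a registered model and an existing compute cluster; the endpoint, deployment, model, and cluster names are placeholder assumptions:

```python
from azure.ai.ml.entities import BatchEndpoint, BatchDeployment
from azure.ai.ml.constants import BatchDeploymentOutputAction

# Create the batch endpoint.
endpoint = BatchEndpoint(name="credit-batch", description="Batch scoring endpoint")
ml_client.batch_endpoints.begin_create_or_update(endpoint).result()

# Deploy a registered MLflow model to the endpoint on an existing compute cluster.
deployment = BatchDeployment(
    name="classifier-mlflow",
    endpoint_name="credit-batch",
    model=ml_client.models.get(name="credit-model", version="1"),
    compute="cpu-cluster",
    instance_count=2,
    max_concurrency_per_instance=2,
    mini_batch_size=10,
    output_action=BatchDeploymentOutputAction.APPEND_ROW,
    output_file_name="predictions.csv",
)
ml_client.batch_deployments.begin_create_or_update(deployment).result()
```

A custom model deployment additionally needs a scoring script and environment, since only MLflow models can rely on autogenerated scoring code.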
Syllabus
- Module 1: Deploy a model to a managed online endpoint
- Introduction
- Explore managed online endpoints
- Deploy your MLflow model to a managed online endpoint
- Deploy a model to a managed online endpoint
- Test managed online endpoints
- Exercise - Deploy an MLflow model to an online endpoint
- Knowledge check
- Summary
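Testing a managed online endpoint can be done from the same SDK; a short sketch, assuming the endpoint and deployment from the earlier example and a local `sample-data.json` request file (an assumed file of input records):

```python
# Send a JSON request file to the deployed model and print the predictions.
response = ml_client.online_endpoints.invoke(
    endpoint_name="credit-endpoint",
    deployment_name="blue",
    request_file="sample-data.json",
)
print(response)
```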
- Module 2: Deploy a model to a batch endpoint
- Introduction
- Understand and create batch endpoints
- Deploy your MLflow model to a batch endpoint
- Deploy a custom model to a batch endpoint
- Invoke and troubleshoot batch endpoints
- Exercise - Deploy an MLflow model to a batch endpoint
- Knowledge check
- Summary
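Invoking and troubleshooting a batch endpoint might look like the sketch below, assuming the batch endpoint from the earlier example and a registered data asset of new input data (the asset name and version are assumptions):

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Point the endpoint at a folder of new data to score.
input_data = Input(type=AssetTypes.URI_FOLDER, path="azureml:new-credit-data:1")

# Invoking a batch endpoint starts a batch scoring job on the deployment's compute cluster.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="credit-batch",
    input=input_data,
)

# Stream the job logs to monitor progress and troubleshoot failures.
ml_client.jobs.stream(job.name)
```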