This course will teach you how to manage the end-to-end lifecycle of your machine learning models using the MLflow managed service on Databricks.
The machine learning workflow involves many intricate steps to ensure that the model you deploy to production is meaningful and robust. Managing this workflow manually is hard, which is why MLflow, a service that manages the integrated machine learning workflow end to end, is a game changer. Databricks makes this even easier by offering a managed version of the service that is simple, intuitive, and easy to use. In this course, Managing Models Using MLflow on Databricks, you will learn to create an MLflow experiment and use it to track the runs of your models. First, you will see how to use explicit logging to record model-related metrics and parameters, and how to view, sort, and compare runs in an experiment. Next, you will see how autologging tracks all relevant parameters, metrics, and artifacts without you having to write logging code explicitly. Then, you will see how to use MLflow to productionize and serve your models, register them in the model registry, and perform batch inference with a registered model. After that, you will learn how to transition your model through lifecycle stages such as Staging, Production, and Archived. Finally, you will see how to work with custom models in MLflow. You will also learn how to package your model in a reusable format as an MLflow project and run training from that project, hosted on GitHub or on the Databricks file system. When you are finished with this course, you will have the skills and knowledge to use MLflow on Databricks to manage the entire lifecycle of your machine learning model.