MLOps: Logging and Loading Microsoft Phi3 Mini 128k in GGUF with MLflow
The Machine Learning Engineer via YouTube
Overview
Learn how to log and load a quantized llama.cpp model in MLflow in this 18-minute tutorial video. Create a Python class that wraps the model, log it to MLflow, and load it back for inference, using a Microsoft Phi3 mini 128k model quantized to int8 in GGUF format with llama.cpp. Follow along with the provided notebook to gain hands-on experience applying MLOps practices to machine learning and data science projects.
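The outline below is a minimal sketch of the approach the video describes (not the video's exact notebook): a custom mlflow.pyfunc.PythonModel wraps a GGUF file loaded with llama-cpp-python, the class and the GGUF artifact are logged to MLflow, and the logged model is loaded back for inference. The file name "phi3-mini-128k-q8_0.gguf", the context size, and the token limit are assumptions for illustration.

import mlflow
import mlflow.pyfunc
import pandas as pd


class Phi3GGUF(mlflow.pyfunc.PythonModel):
    def load_context(self, context):
        # The GGUF file is attached to the logged model as an artifact
        # and only loaded when the model is loaded or served.
        from llama_cpp import Llama
        self.llm = Llama(
            model_path=context.artifacts["gguf_file"],
            n_ctx=4096,  # context window chosen for this sketch
        )

    def predict(self, context, model_input):
        # Expect a DataFrame with a "prompt" column; return one completion per row.
        outputs = []
        for prompt in model_input["prompt"].tolist():
            result = self.llm(prompt, max_tokens=256)
            outputs.append(result["choices"][0]["text"])
        return outputs


with mlflow.start_run():
    model_info = mlflow.pyfunc.log_model(
        artifact_path="phi3_gguf",
        python_model=Phi3GGUF(),
        artifacts={"gguf_file": "phi3-mini-128k-q8_0.gguf"},  # hypothetical local path
        pip_requirements=["llama-cpp-python", "pandas"],
    )

# Load the logged model back from the tracking store and run inference.
loaded = mlflow.pyfunc.load_model(model_info.model_uri)
print(loaded.predict(pd.DataFrame({"prompt": ["What is MLOps?"]})))

Logging the GGUF file as an artifact keeps the quantized weights versioned alongside the wrapper class, so the same model URI can later be served or registered without re-downloading or re-quantizing the model.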
Syllabus
MLOPS MLFlow: Log and Load in MLflow Microsoft Phi3 mini 128k in GGUF #machinelearning #datascience
Taught by
The Machine Learning Engineer