
Securing ML Workloads with Kubeflow and MLOps - Pwned By Statistics

Linux Foundation via YouTube

Overview

Explore the intersection of machine learning security and MLOps in this 51-minute conference talk. Delve into the challenges of ML implementation and learn how Kubeflow and MLOps practices can harden machine learning workloads. Examine illustrative models such as the Circle Detector and the Wolf vs Husky Detector, along with flaws in federated learning. Gain insights into building secure pipelines and understand attacks such as distillation, model extraction, and hidden-data attacks. Investigate secret memorization, leakage detection, and how differential privacy mitigates these risks. Analyze the role of threat modeling in ML systems and explore AutoML, AI models, and data drift. The talk concludes with a summary and a Q&A session on securing ML workloads with Kubeflow and MLOps.
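Differential privacy, one of the mitigations the talk covers, is commonly implemented with the Laplace mechanism: add noise scaled to a query's sensitivity divided by the privacy budget epsilon. The following is a minimal sketch of that idea (the function name, dataset, and parameters are illustrative, not taken from the talk):

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float, rng=None) -> float:
    """Release a noisy value satisfying epsilon-differential privacy.

    Noise is drawn from a Laplace distribution with scale
    sensitivity / epsilon, the standard calibration for this mechanism.
    """
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a counting query. Counting queries have
# sensitivity 1, since adding or removing one record changes the count
# by at most 1.
ages = [23, 35, 41, 29, 52, 38]
true_count = sum(1 for a in ages if a > 30)
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5,
                                rng=np.random.default_rng(0))
print(f"true={true_count}, noisy={noisy_count:.2f}")
```

Smaller epsilon means stronger privacy but noisier answers; the scale of the noise grows as 1/epsilon.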

Syllabus

Introduction
Why ML
Why ML is hard
MLOps
Circle Detector
Wolf vs Husky Detector
Flaws in Federated Learning
Additional Techniques
Building a Pipeline
Extracting Your Model
Distillation Attack
Model Extraction Attack
Hidden Data Attack
Secret Memorization
Leakage Detection
Summary
Questions
AutoML
AI Models
Data Drift
Attack Systems
Differential Privacy
Threat Modeling
MLOps
Outro
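The syllabus includes a section on model extraction attacks, where an attacker queries a black-box prediction API and trains a local surrogate on the answers. Here is a hedged, self-contained sketch of that pattern (the hidden linear "victim" and all data are invented for illustration; the talk's actual demo may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Black-box "victim": a hidden linear classifier the attacker can only query,
# standing in for a model behind a prediction API.
w_secret = np.array([2.0, -1.0, 0.5, 0.0])

def victim_predict(X):
    return (X @ w_secret > 0).astype(int)

# The attacker probes the API with inputs of its choosing and records labels.
X_probe = rng.normal(size=(1000, 4))
y_stolen = victim_predict(X_probe)

# Surrogate: least-squares fit to the stolen labels, thresholded at zero.
w_hat, *_ = np.linalg.lstsq(X_probe, y_stolen - 0.5, rcond=None)

# On fresh inputs, the surrogate closely mimics the victim's decisions.
X_test = rng.normal(size=(1000, 4))
agreement = ((X_test @ w_hat > 0).astype(int) == victim_predict(X_test)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of fresh inputs")
```

Defenses discussed in this space include rate-limiting queries, returning only coarse labels instead of confidence scores, and monitoring for probe-like query patterns.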

Taught by

Linux Foundation

