Overview
Explore MLSecOps and automated ML model evaluations on Kubernetes in this conference talk. Delve into the intersection of machine learning, DevOps, infrastructure, and security, understanding the importance of robust MLSecOps infrastructure to prevent data leakage through model inversion. Learn how to overcome the complexities of monitoring model security on Kubernetes at scale by implementing automated online real-time evaluations and detailed offline analysis. Discover the use of KServe, Knative, Apache Kafka, and Trusted-AI tools for serving ML models, persisting payloads, and automating evaluations in production environments. Gain insights into real-time model explanations, fairness detection, and adversarial detection techniques to visualize and report potential security threats over time.
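To give a flavor of the offline evaluation step described above, below is a minimal sketch (not taken from the talk) of auditing a batch of logged prediction payloads for group fairness with AIF360, one of the LF AI Trusted-AI toolkits; the column names, protected attribute, and group encodings are illustrative assumptions.

# Minimal sketch: offline fairness check over logged predictions with AIF360.
# The "logged" DataFrame stands in for a batch of request/response payloads
# replayed from Kafka; its columns and group encodings are assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

logged = pd.DataFrame({
    "age":        [25, 47, 35, 52, 29, 41],
    "gender":     [0,  1,  0,  1,  0,  1],   # 1 = assumed privileged group
    "prediction": [0,  1,  0,  1,  1,  1],   # model output being audited
})

# Wrap the batch as a binary-label dataset with "gender" as the protected attribute.
dataset = BinaryLabelDataset(
    df=logged,
    label_names=["prediction"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Values far from 1.0 (disparate impact) or 0.0 (statistical parity difference)
# flag the batch for closer inspection and reporting over time.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())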
Syllabus
Introduction
Power of Choice
Security in AI
Demo
ML Pipelines
ML Pipeline Metrics
CaseUp
Offline ML Evaluation
Online ML Evaluation
KServe
Predictors
Fairness Detection
Loggers
Data Ingestion
Demonstration
Trusted-AI
Istio
Taught by
Linux Foundation