Linux Foundation

MLSecOps - Automated Online and Offline ML Model Evaluations on Kubernetes

Linux Foundation via YouTube

Overview

Explore MLSecOps and automated ML model evaluations on Kubernetes in this conference talk. Delve into the intersection of machine learning, DevOps, infrastructure, and security, and understand why robust MLSecOps infrastructure is needed to prevent data leakage through model inversion attacks. Learn how to manage the complexity of monitoring model security on Kubernetes at scale by combining automated online real-time evaluations with detailed offline analysis. Discover how KServe, Knative, Apache Kafka, and the Trusted-AI toolkits are used to serve ML models, persist prediction payloads, and automate evaluations in production environments. Gain insights into real-time model explanations, fairness detection, and adversarial detection techniques that visualize and report potential security threats over time.
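
The serving pattern the talk describes, where KServe and Knative serve the model while every prediction payload is forwarded toward Apache Kafka for online and offline Trusted-AI evaluations, can be sketched with the KServe Python SDK. The snippet below is a minimal illustration of that pattern, not code from the talk; the model name, namespace, storage URI, and Knative broker URL are placeholder assumptions.

from kubernetes.client import V1ObjectMeta
from kserve import (
    KServeClient,
    constants,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1LoggerSpec,
    V1beta1PredictorSpec,
    V1beta1SKLearnSpec,
)

# Placeholder values throughout: model name, namespace, storage URI, and broker URL.
isvc = V1beta1InferenceService(
    api_version=constants.KSERVE_GROUP + "/v1beta1",
    kind=constants.KSERVE_KIND,
    metadata=V1ObjectMeta(name="demo-model", namespace="mlsecops-demo"),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            # Serve a scikit-learn model from object storage.
            sklearn=V1beta1SKLearnSpec(storage_uri="gs://example-bucket/models/demo"),
            # Forward every request and response to a Knative broker so that
            # downstream evaluation jobs can consume the payloads from Kafka.
            logger=V1beta1LoggerSpec(
                mode="all",
                url="http://broker-ingress.knative-eventing.svc.cluster.local/mlsecops-demo/default",
            ),
        )
    ),
)

KServeClient().create(isvc)

With mode="all", the logger emits both requests and responses as CloudEvents, so evaluation jobs (for example, fairness or adversarial detectors built on the Trusted-AI toolkits) can analyze live traffic without sitting in the serving path.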

Syllabus

Introduction
Power of Choice
Security in AI
Demo
ML Pipelines
ML Pipeline Metrics
CaseUp
Offline ML Evaluation
Online ML Evaluation
KServe
Predictors
Fairness Detection
Loggers
Data Ingestion
Demonstration
Trusted-AI
Istio

Taught by

Linux Foundation

