Defending Against Adversarial Model Attacks Using Kubeflow
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Explore a conference talk on defending against adversarial model attacks using Kubeflow. Learn why the robustness of AI algorithms matters in critical domains such as self-driving cars, facial recognition, and hiring. Discover how to build a pipeline resistant to adversarial attacks by leveraging Kubeflow Pipelines and integrating with the LF AI Adversarial Robustness Toolbox (ART). Gain insights into testing a machine learning model's adversarial robustness in production on Kubeflow Serving using payload logging and ART. Topics covered include Trusted AI, Open Governance, Security, Toolkit, and related projects. The talk concludes with a Kubeflow survey and a practical demonstration.
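To make the adversarial-robustness testing concrete, here is a minimal NumPy sketch of the Fast Gradient Sign Method (FGSM), the classic evasion attack that ART implements as `FastGradientMethod`. The logistic-regression weights, the input point, and the epsilon value are illustrative assumptions, not material from the talk; in practice you would wrap a real model with an ART estimator and let ART compute the gradients.

```python
import numpy as np

def sigmoid(z):
    """Logistic function."""
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability of the positive class for a logistic-regression model."""
    return sigmoid(x @ w + b)

def fgsm(w, b, x, y, eps):
    """FGSM: step x by eps in the sign of the loss gradient w.r.t. x.

    For binary cross-entropy on logistic regression, the gradient of the
    loss with respect to the input is (sigmoid(w.x + b) - y) * w.
    """
    grad = (predict(w, b, x) - y) * w
    return x + eps * np.sign(grad)

# Toy classifier (assumed for illustration): separates points by the
# sum of their features.
w = np.array([1.0, 1.0])
b = 0.0

x = np.array([0.5, 0.5])  # clean input, correctly classified positive
y = 1.0

x_adv = fgsm(w, b, x, y, eps=0.6)  # adversarial copy of the same input

print(predict(w, b, x) > 0.5)      # clean input: classified positive
print(predict(w, b, x_adv) > 0.5)  # perturbed input: prediction flips
```

A robustness test in a pipeline step would run an attack like this against a batch of logged payloads and report how many predictions flip; ART automates exactly that loop across many attack types and frameworks.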
Syllabus
Introduction
Trusted AI
Open Governance
Security
Toolkit
Other Projects
Adversarial Robustness Toolbox
Kubeflow Survey
Demo
Taught by
CNCF [Cloud Native Computing Foundation]