Defending Against Adversarial Model Attacks Using Kubeflow

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Explore a conference talk on defending against adversarial model attacks using Kubeflow. Learn why AI algorithm robustness matters in critical domains such as self-driving cars, facial recognition, and hiring. Discover how to build a pipeline resistant to adversarial attacks by leveraging Kubeflow Pipelines and integrating with the LFAI Adversarial Robustness Toolbox (ART). Gain insights into testing machine learning models' adversarial robustness in production on Kubeflow Serving using payload logging and ART. Cover topics including Trusted AI, Open Governance, Security, and the Toolkit, along with other related projects. Conclude with a Kubeflow survey and a practical demonstration.

Syllabus

Introduction
Trusted AI
Open Governance
Security
Toolkit
Other Projects
Adversarial robustness toolbox
Kubeflow Survey
Demo

Taught by

CNCF [Cloud Native Computing Foundation]

