Fortifying AI Security in Kubernetes with Confidential Containers - CoCo

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Explore a cutting-edge approach to securing AI models in Kubernetes environments in this conference talk. Delve into confidential computing and discover how Confidential Containers (CoCo), a CNCF sandbox project, enhances AI security. Learn about the challenges of protecting valuable AI intellectual property and how CoCo addresses them by encrypting memory to safeguard data while in use. Examine the integration of CoCo with the KServe project to strengthen AI model protection in Kubernetes, and gain insights into applications of CoCo beyond inference, including its role in providing general memory protection for foundational platforms. Understand how to secure AI models without placing implicit trust in third-party platform providers. This 33-minute presentation by Suraj Deshmukh from Microsoft and Pradipta Banerjee from Red Hat offers valuable knowledge for organizations seeking to fortify their AI security in cloud-native environments.
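The talk itself is the primary resource, but the deployment model it discusses can be sketched briefly: in Kubernetes, CoCo workloads are typically selected through a RuntimeClass backed by Kata Containers running inside a hardware trusted execution environment. The manifest below is a hypothetical, minimal illustration of that pattern; the RuntimeClass name, image, and annotations vary by platform and TEE (`kata-qemu-coco-dev` is one name used in CoCo quick-start guides, and is an assumption here, not something stated in the talk summary).

```yaml
# Hypothetical example: running an inference server as a Confidential Container.
# Assumes a cluster where the CoCo operator has installed a Kata-based
# RuntimeClass named "kata-qemu-coco-dev" (name varies by platform/TEE).
apiVersion: v1
kind: Pod
metadata:
  name: model-inference
spec:
  # Selecting the CoCo RuntimeClass places the pod in an encrypted-memory
  # confidential VM, so the model weights are protected while in use.
  runtimeClassName: kata-qemu-coco-dev
  containers:
    - name: inference
      # Placeholder image; in practice this would be an encrypted or
      # signed image pulled and verified inside the confidential guest.
      image: registry.example.com/models/inference-server:latest
      ports:
        - containerPort: 8080
```

With KServe, the same idea applies at the InferenceService level, where the serving pods are scheduled onto the confidential runtime so the model is decrypted only inside the trusted environment.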

Syllabus

Fortifying AI Security in Kubernetes with Confidential Containers (CoCo)

Taught by

CNCF [Cloud Native Computing Foundation]

