
BlindAI: Secure Remote ML Inference with Intel SGX Enclaves

Confidential Computing Consortium via YouTube

Overview

Explore secure remote ML inference using Intel SGX enclaves in this 57-minute talk from the Confidential Computing Consortium. Delve into BlindAI, an open-source confidential computing solution that balances security, privacy, and performance in machine learning applications. Learn about the motivation behind BlindAI, its design considerations for Intel SGX specifics, and the results of an independent security audit. Discover how this solution protects model and user data confidentiality while ensuring prediction integrity. Examine topics such as on-device machine learning, homomorphic encryption, trusted computing bases, threat mitigation strategies, and transparency in reproducibility and auditability. Access accompanying slides and the BlindAI repository for further exploration, and join the Discord community for questions and discussions.

Syllabus

Intro
Security and ML inference
On-device Machine Learning
Homomorphic encryption
Confidential Computing
Trusted computing base
Shrink the TCB
Overview
Enclave manifest
Threat: Memory vulnerability
Defense: SGX enclave in Rust
Threat: Iago attacks, confused deputy
Threat: Software side channels
Defense: Constant-time programming
Side-channel mitigation for application code (hard to enforce in all code: compilers are allowed to introduce side channels when optimizing)
Threat: n-day attacks
Defense: Plan for the worst
Transparency: reproducibility
Transparency: optimize for auditability
How do we protect ourselves?
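To illustrate the "Constant-time programming" defense listed above, here is a minimal sketch in Rust (the language the talk uses for the enclave). This is illustrative only, not BlindAI code: it compares two byte slices without an early exit, so the time taken does not leak at which byte the inputs first differ.

```rust
// Hypothetical sketch of constant-time comparison; not taken from BlindAI.
// A naive `a == b` may return as soon as one byte differs, leaking timing
// information about secret data. Here we accumulate differences with XOR/OR
// so the loop always runs over the full slice.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // lengths are usually public, so branching here is fine
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y; // stays 0 only if every byte pair matches
    }
    diff == 0
}

fn main() {
    assert!(ct_eq(b"secret", b"secret"));
    assert!(!ct_eq(b"secret", b"secre_"));
    println!("ok");
}
```

As the syllabus notes, this property is hard to enforce in general because optimizing compilers may reintroduce branches; production code typically relies on audited crates (e.g. `subtle`) rather than hand-rolled loops.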

Taught by

Confidential Computing Consortium
