Watch a 27-minute conference talk exploring the security risks and defensive strategies in machine learning supply chains. Learn about the growing reliance on public model hubs for foundation models, and how ML model files are vulnerable to Model Serialization Attacks (MSAs), in which malicious code injected into a model file executes during deserialization, turning an innocent model into a system backdoor.

Explore two complementary defenses built on open-source tools: model scanning with ModelScan by Protect AI, and cryptographic signing with Sigstore by OpenSSF. Model scanning provides visibility into otherwise opaque model files, while cryptographic signatures establish a trusted link between an artifact and its source. See demonstrations of how these traditional software security approaches can be adapted to protect AI/ML systems from supply chain attacks and keep Trojan-horse models out of production.
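The deserialization risk the talk describes is easy to reproduce with Python's `pickle` format, which many ML frameworks use for model files. This is a minimal sketch, not the talk's own demo: the `MaliciousModel` class and the harmless `eval` payload are illustrative stand-ins for an attacker's real payload (typically an OS command or reverse shell hidden in a shared model file).

```python
import pickle

class MaliciousModel:
    # __reduce__ tells pickle how to reconstruct this object.
    # An attacker can return any callable plus its arguments,
    # and pickle invokes that callable during deserialization.
    def __reduce__(self):
        # Benign stand-in payload; a real attack would execute
        # arbitrary OS commands here instead of a len() call.
        return (eval, ("len('arbitrary code ran at load time')",))

payload = pickle.dumps(MaliciousModel())

# Simply *loading* the "model" runs the attacker's code; the
# caller never even gets a model object back, only eval's result.
result = pickle.loads(payload)
```

This is why loading an untrusted pickle file is equivalent to running untrusted code: the attack fires before any application-level validation can happen.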
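The core idea behind model scanning — statically inspecting a pickle's opcode stream for dangerous imports without ever executing it — can be sketched with the standard library's `pickletools`. This is a toy illustration of the technique, not ModelScan's actual rule set; the `DANGEROUS` denylist and `Exploit` class below are assumptions made for the example.

```python
import pickle
import pickletools
import subprocess

# Toy denylist of modules that have no business in a model file.
DANGEROUS = {"os", "posix", "nt", "subprocess", "builtins"}

def scan_pickle(payload: bytes) -> list:
    """Walk the opcode stream statically and flag risky imports.

    GLOBAL / STACK_GLOBAL opcodes are how a pickle references
    callables (e.g. subprocess.call) that REDUCE later invokes.
    """
    findings = []
    ops = list(pickletools.genops(payload))
    for i, (op, arg, _pos) in enumerate(ops):
        if op.name == "GLOBAL":
            # Protocol <= 3: arg is "module name" in one string.
            if arg.split(" ")[0] in DANGEROUS:
                findings.append("GLOBAL import of %r" % arg)
        elif op.name == "STACK_GLOBAL":
            # Protocol 4+: module and name were pushed as strings
            # by the preceding unicode opcodes.
            strings = [a for o, a, _ in ops[:i]
                       if o.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE")]
            if len(strings) >= 2 and strings[-2] in DANGEROUS:
                findings.append("import of %s.%s" % (strings[-2], strings[-1]))
    return findings

class Exploit:
    def __reduce__(self):
        return (subprocess.call, (["echo", "pwned"],))

# The scanner flags the subprocess.call reference without running it.
report = scan_pickle(pickle.dumps(Exploit()))
```

Because the scan never calls `pickle.loads`, it gives defenders visibility into a black-box model file with no risk of triggering the payload.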
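Sigstore's real flow involves keyless signing with short-lived OIDC-bound certificates and a public transparency log, which does not fit in a few lines. The underlying idea, though — binding an artifact's digest to a signature that consumers verify before loading — can be sketched with the standard library's `hmac` as a simplified stand-in for asymmetric signing. The key, helper names, and byte strings here are all illustrative assumptions.

```python
import hashlib
import hmac

# Stand-in shared secret; Sigstore instead uses ephemeral keypairs
# tied to a verified OIDC identity, recorded in a transparency log.
SIGNING_KEY = b"publisher-private-key-material"

def sign_model(model_bytes: bytes) -> str:
    """Publisher side: sign the artifact's SHA-256 digest."""
    digest = hashlib.sha256(model_bytes).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_model(model_bytes: bytes, signature: str) -> bool:
    """Consumer side: recompute and compare *before* deserializing."""
    return hmac.compare_digest(sign_model(model_bytes), signature)

model = b"\x80\x04fake-model-weights"   # stand-in for a model file
sig = sign_model(model)

ok = verify_model(model, sig)                 # untampered artifact
tampered = verify_model(model + b"!", sig)    # one flipped byte
```

The design point carries over directly: verification happens on raw bytes before any deserialization, so a tampered model is rejected without ever giving its payload a chance to run.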