Overview
Learn about model-centric security attacks in machine learning and their mitigation strategies in this technical talk from OpenSSF. Explore common security vulnerabilities in AI/ML applications, focusing on malicious code injection in pickled model files and potentially harmful code generated by Large Language Models (LLMs). Discover how Flyte, an open-source ML orchestrator, helps protect against these attack vectors through practical demonstrations and examples. Examine the limitations of Flyte's security measures and learn how to strengthen protection with additional open-source tools such as safetensors and ONNX. Gain valuable insights into securing AI orchestration workflows as machine learning applications continue to expand across text, image, audio, and video generation domains.
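To illustrate the pickle attack vector the talk covers, here is a minimal, benign sketch of why loading an untrusted pickled model file is dangerous: pickle lets an object's `__reduce__` method name an arbitrary callable to run at load time. The `MaliciousModel` class below is a hypothetical example (not from the talk); it uses `eval` on a harmless expression where a real attacker could substitute `os.system` or similar.

```python
import pickle

class MaliciousModel:
    """Stand-in for a tampered model object inside a pickle file."""

    def __reduce__(self):
        # pickle serializes this as an instruction to call
        # eval("6 * 7") during deserialization -- arbitrary code
        # execution triggered merely by loading the file.
        return (eval, ("6 * 7",))

payload = pickle.dumps(MaliciousModel())
# Simply unpickling the "model" runs the embedded call.
result = pickle.loads(payload)
print(result)  # the eval'd expression's value, 42
```

This is why the talk recommends formats like safetensors or ONNX for model interchange: they store only tensor data and graph metadata, with no mechanism for embedding executable code.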
Syllabus
Secure AI Orchestration: Mitigate Model-centric Attacks with Flyte - Niels Bantilan, Union.ai
Taught by
OpenSSF