Overview
Learn about critical security challenges and defensive strategies for open-source Large Language Models (LLMs) in this technical conference talk from JP Morgan Chase's Padmajeet Mhaske. Explore how popular open-source LLMs like GPT, BERT, and T5 face vulnerabilities including model inversion attacks, data poisoning, insecure deployment practices, and adversarial examples. Examine the inherent security risks that come with transparent model architectures, including potential exposure of sensitive data and proprietary information extraction through model inversion. Discover essential defensive measures including differential privacy implementation, adversarial training techniques, and robust data validation protocols. Master security best practices such as penetration testing and real-time monitoring systems while understanding the importance of building security-aware communities around open-source LLM development. Gain practical insights for strengthening LLM security to ensure safe deployment and maintain trust in AI applications.
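Among the defensive measures the talk covers, differential privacy is the most mechanically concrete: it limits what a model inversion attack can recover by bounding each individual record's influence on any released statistic. As a minimal illustration (not taken from the talk itself), the sketch below applies the standard Laplace mechanism to a counting query over a hypothetical training corpus; the `has_pii` field, the corpus, and the `epsilon` value are all illustrative assumptions.

```python
import random

def laplace_sample(scale: float) -> float:
    """Draw one sample from Laplace(0, scale).

    The difference of two independent Exp(1) draws is Laplace(0, 1),
    so scaling that difference gives the required distribution.
    """
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def private_count(records, predicate, epsilon: float) -> float:
    """Answer a counting query with epsilon-differential privacy.

    A count query has sensitivity 1 (adding or removing one record
    changes the result by at most 1), so Laplace noise with scale
    1/epsilon is sufficient for epsilon-DP (Laplace mechanism).
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_sample(1.0 / epsilon)

if __name__ == "__main__":
    # Hypothetical corpus: each record is tagged with whether it
    # contains personally identifiable information (PII).
    corpus = [{"has_pii": i % 7 == 0} for i in range(1000)]
    noisy = private_count(corpus, lambda r: r["has_pii"], epsilon=0.5)
    print(f"noisy PII count: {noisy:.1f}")
```

Smaller `epsilon` values add more noise and give stronger privacy; the same trade-off drives differentially private training of LLMs, where per-example gradient clipping plays the role of the sensitivity bound.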
Syllabus
Adversarial Resilience in Open-Source LLMs: A Comprehensive Approach to Securit... - Padmajeet Mhaske
Taught by
OpenSSF