
Adversarial Resilience in Open-Source LLMs: A Comprehensive Approach to Security and Robustness

OpenSSF via YouTube

Overview

Learn about critical security challenges and defensive strategies for open-source Large Language Models (LLMs) in this technical conference talk by JP Morgan Chase's Padmajeet Mhaske. Explore how widely used open model families such as BERT, T5, and GPT-style architectures face vulnerabilities including model inversion attacks, data poisoning, insecure deployment practices, and adversarial examples. Examine the security risks inherent in transparent model architectures, including exposure of sensitive training data and extraction of proprietary information through model inversion. Discover essential defensive measures, including differential privacy, adversarial training, and robust data validation protocols. Learn security best practices such as penetration testing and real-time monitoring, and understand the importance of building security-aware communities around open-source LLM development. Gain practical insights for strengthening LLM security to ensure safe deployment and maintain trust in AI applications.
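To make one of the defenses named above concrete: differential privacy is often implemented by adding calibrated noise to query results or training statistics, so that individual training records cannot be reliably inferred. The sketch below shows the classic Laplace mechanism in NumPy; the function name and parameter choices are illustrative, not taken from the talk.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Release true_value with Laplace noise scaled to sensitivity/epsilon.

    Smaller epsilon = stronger privacy guarantee, but noisier output.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(seed=0)

# A count query over training records has sensitivity 1: adding or
# removing any single record changes the count by at most 1.
true_count = 42
private_count = laplace_mechanism(true_count, sensitivity=1.0,
                                  epsilon=0.5, rng=rng)
print(private_count)
```

The same idea extends to model training (e.g. DP-SGD, where per-example gradients are clipped and noised), which is one way the "differential privacy implementation" the talk mentions is applied to LLMs in practice.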

Syllabus

Adversarial Resilience in Open-Source LLMs: A Comprehensive Approach to Security and Robustness - Padmajeet Mhaske

Taught by

OpenSSF

