Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

Multi-Chain Prompt Injection and Jailbreaking in LLM Applications - Security Testing and Defense Strategies

Donato Capitella via YouTube

Overview

Learn about multi-chain prompt injection attacks targeting modern LLM application workflows in this technical presentation that explores vulnerabilities in systems using multiple LLM chains. Discover how traditional testing methods for jailbreak and prompt injection fall short when dealing with complex scenarios involving query rewriting, plugin interactions, and formatted outputs. Through detailed examples including a Workout Planner application, examine how attackers can exploit chain interactions to bypass security measures and propagate malicious prompts. Explore practical demonstrations comparing traditional jailbreaking tools like garak against multi-chain applications, analyze specific injection payload techniques, and understand potential mitigation strategies. Gain hands-on experience through the MyLLMDoc Challenge while accessing additional resources and documentation for implementing defensive measures in LLM applications.

Syllabus

- Introduction
- TL;DR
- Jailbreak vs Prompt Injection
- Workout Planner Sample Application
- Multi-Chain LLM Applications
- Traditional Jailbreaking Failing
- garak vs Multi-Chain LLM Application
- Multi-Chain Prompt Injection Payloads
- MyLLMDoc Challenge
- Mitigations and Defence
- Conclusion and Further Resources

Taught by

Donato Capitella

Reviews

Start your review of Multi-Chain Prompt Injection and Jailbreaking in LLM Applications - Security Testing and Defense Strategies

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.