Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

Prompt Injection and Jailbreaking Techniques for Banking LLM Agents - Security Demonstration

Donato Capitella via YouTube

Overview

Learn how to identify and exploit vulnerabilities in AI systems through a detailed walkthrough of a LLM jailbreak/prompt injection challenge from BSides London 2023's CTF competition. Explore real-world security implications as the video demonstrates compromising a banking AI agent built with OpenAI's GPT-4 and Langchain, revealing methods to extract confidential information through prompt manipulation. Dive into advanced exploitation techniques, including an unsolved challenge component involving SQL injection exploitation through AI agent manipulation. Reference the Damn Vulnerable LLM Agent project and Synthetic Recollections publication to understand the technical framework and research behind these security vulnerabilities.

Syllabus

Prompt Injection / JailBreaking a Banking LLM Agent (GPT-4, Langchain)

Taught by

Donato Capitella

Reviews

Start your review of Prompt Injection and Jailbreaking Techniques for Banking LLM Agents - Security Demonstration

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.