Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

Security Risks of LLM Browser Agents - Understanding Prompt Injection Vulnerabilities

Donato Capitella via YouTube

Overview

Explore a 13-minute video presentation examining security vulnerabilities in Large Language Model (LLM) browser agents, focusing on prompt injection risks detailed in WithSecure Labs' research. Learn through a practical attack demonstration showing mailbox information exfiltration, understand the operational mechanics of LLM browser agents using TaxyAI as an example, and discover key attack vectors along with their limitations. Gain valuable insights into defensive strategies, including agency limitation and detection methods, to protect against these emerging security threats. Access a comprehensive mindmap for visual reference and follow along with timestamped segments covering the complete analysis from introduction through defense mechanisms.

Syllabus

- - Introduction
- - Attack Demo Exfiltrate Information from Mailbox
- - How LLM Browser Agents Work TaxyAI Operational Loop
- - Injection Attack
- - Attack Caveats and Limitations
- - Defence Strategies Limit Agency, Detection

Taught by

Donato Capitella

Reviews

Start your review of Security Risks of LLM Browser Agents - Understanding Prompt Injection Vulnerabilities

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.