Overview
Explore the security risks and mitigation strategies for building Large Language Model (LLM) applications in this recorded webinar from WithSecure. Learn about the evolution of LLMs, the development of autonomous agent applications, and common misconceptions surrounding AI safety. Watch practical demonstrations of prompt injection vulnerabilities, including a browser agent demo with Taxi AI, and understand the root causes of LLM alignment issues. Compare traditional injection attacks with LLM-specific threats, and discover essential controls and defense mechanisms for protecting LLM applications. Gain valuable insights through comprehensive take-away points and a Q&A session that addresses key security concerns in LLM implementation.
Syllabus
- Where did LLMs come from?
- Building LLM applications
- LLM agents
- Misconceptions about AI safety
- Risks of LLM use-cases
- Prompt injection demo
- LLM agents
- Prompt Injection Demo in Browser Agent Taxi AI
- Root cause of LLM alignement issues
- Comparison with traditional injection attacks
- Controls and defences against prompt injection
- Take-away points
- Questions
Taught by
Donato Capitella