Authorization Best Practices for Systems Using Large Language Models
Cloud Security Alliance via YouTube
Overview
Explore authorization best practices for systems utilizing Large Language Models in this 26-minute conference talk by the Cloud Security Alliance. Gain insights into the unique security considerations that arise with the integration of LLMs, including prompt injection attacks and vector database risks. Discover the components and design patterns involved in LLM-based systems, focusing on authorization implications specific to each element. Learn about best practices and patterns for various use cases, such as retrieval augmented generation (RAG) with vector databases, API calls to external systems, and SQL queries generated by LLMs. Delve into the fundamental concerns surrounding the development of agentic systems, equipping yourself with essential knowledge to build more robust and secure LLM-powered applications.
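To make the retrieval augmented generation use case concrete, the minimal sketch below shows one common authorization pattern for RAG with a vector database: filtering search results by the caller's permissions before any retrieved text reaches the LLM prompt. This example is not taken from the talk; the names (Document, retrieve, allowed_groups) and the in-memory store are hypothetical, stand-in illustrations of the general idea.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    embedding: list[float]
    allowed_groups: set[str] = field(default_factory=set)  # authorization metadata

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_embedding: list[float], store: list[Document],
             user_groups: set[str], k: int = 3) -> list[Document]:
    """Return the top-k documents the calling user is actually allowed to read.

    The permission check happens in the retrieval layer, before any text is
    placed into the LLM prompt, so the model never sees content the user
    could not access directly -- a prompt injection cannot widen access.
    """
    visible = [d for d in store if d.allowed_groups & user_groups]
    visible.sort(key=lambda d: cosine(query_embedding, d.embedding), reverse=True)
    return visible[:k]

# Example: a user in "engineering" cannot retrieve the HR-only document,
# no matter how the prompt is phrased.
store = [
    Document("Deployment runbook", [0.9, 0.1], {"engineering"}),
    Document("Salary bands",       [0.8, 0.2], {"hr"}),
]
print([d.text for d in retrieve([1.0, 0.0], store, {"engineering"})])
```

The same principle carries over to the other use cases mentioned above: API calls and LLM-generated SQL queries should execute under the end user's credentials or a suitably restricted role, rather than relying on the model to enforce access rules.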
Syllabus
Authorization best practices for systems using Large Language Models
Taught by
Cloud Security Alliance