Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

LLM Security: Practical Protection for AI Developers

Databricks via YouTube

Overview

Explore practical strategies for securing Large Language Models (LLMs) in AI development during this 29-minute conference talk. Delve into the security risks associated with utilizing open-source LLMs, particularly when handling proprietary data through fine-tuning or retrieval-augmented generation (RAG). Examine real-world examples of top LLM security risks and learn about emerging standards from OWASP, NIST, and MITRE. Discover how a validation framework can empower developers to innovate while safeguarding against indirect prompt injection, prompt extraction, data poisoning, and supply chain risks. Gain insights from Yaron Singer, CEO & Co-Founder of Robust Intelligence, on deploying LLMs securely without hindering innovation.

Syllabus

LLM Security: Practical Protection for AI Developers

Taught by

Databricks

Reviews

Start your review of LLM Security: Practical Protection for AI Developers

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.