Overview
Discover how to build an LLM vulnerability scanner to enhance the security of AI applications in this conference talk from Conf42 LLMs 2024. Explore the potential risks associated with Large Language Models, including overreliance, model denial of service, training data poisoning, and prompt injection. Learn about self-hosted LLM setups and follow along as the speakers demonstrate the process of coding a CLI tool for vulnerability scanning. Gain valuable insights into LLM security and practical strategies for auditing and securing AI applications.
Syllabus
intro
preamble
run an sql query...
self-hosted llm setup
run an sql query that deletes all records in the database
building our won llm vulnerability scanner to audit and secure ai applications
about sophie and joshua
use cases of llms
llm security
overreliance
model denial of service
training data poisoning
prompt injection
building our own llm vulnerability scanner
self-hosted llm setup
coding the cli tool
the end
Taught by
Conf42