Overview
Explore a 44-minute DEF CON 31 conference talk that examines the integration of Large Language Models (LLMs) into fuzz testing methodologies, and discover how LLMs are transforming security assessment strategies despite their limitations in general code writing.

Learn about the fundamentals of LLMs, including how they work, potential applications for hackers, prompt engineering techniques, and current model limitations. Gain insights into fuzzing principles, objectives, and challenges, with a specific focus on Python. Get introduced to FuzzForest, an open-source tool that leverages LLMs to automate the writing, fixing, and triaging of fuzz tests for Python code, and examine real-world results from running the tool on popular open-source Python libraries, which revealed numerous bugs.

Consider the future implications of AI in security testing through an analysis of efficacy and potential industry impact. While basic knowledge of fuzz testing, programming, and AI concepts is beneficial, the talk includes introductory material to accommodate various expertise levels.
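To illustrate the fuzzing principles the talk covers, here is a minimal sketch of a random fuzzer in Python. The target function `first_field` and the fuzzing loop are hypothetical examples written for this description, not code from FuzzForest or the talk; real fuzzers (and the LLM-generated harnesses the talk discusses) add coverage guidance, corpus management, and crash triage on top of this basic idea.

```python
import random
import string

def first_field(text: str) -> str:
    """Hypothetical target under test: return the first character of the
    first comma-separated field. Bug: assumes that field is non-empty."""
    return text.split(",")[0][0]  # IndexError on empty first field

def fuzz(target, iterations=1000, seed=0):
    """Minimal random fuzzer: feed random printable strings to `target`
    and collect every input that raises an exception."""
    rng = random.Random(seed)  # seeded for reproducible runs
    crashes = []
    for _ in range(iterations):
        sample = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 20))
        )
        try:
            target(sample)
        except Exception as exc:  # any uncaught exception is a finding
            crashes.append((sample, exc))
    return crashes

findings = fuzz(first_field)
print(f"{len(findings)} crashing inputs found")
```

Even this naive loop quickly surfaces the empty-input bug; the talk's premise is that LLMs can automate the harder part, writing the harness around each library API so that such loops exercise real code.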
Syllabus
DEF CON 31 - LLMs at the Forefront: Pioneering the Future of Fuzz Testing - X
Taught by
DEFCONConference