Overview
Explore a research presentation from USENIX Security '24 that investigates how effectively Large Language Models (LLMs) perform code analysis tasks. Delve into a systematic evaluation by researchers from the University of California, Davis and Temple University examining LLMs' ability to process and analyze program code, with particular attention to obfuscated code. Learn about real-world case studies demonstrating practical applications of LLMs in code analysis, and understand both their potential benefits and inherent limitations. Gain insights from this 17-minute talk, which addresses a gap in the existing literature by providing a comprehensive assessment of LLM performance on automated code analysis, contributing valuable knowledge to this emerging field of study.
Syllabus
USENIX Security '24 - Large Language Models for Code Analysis: Do LLMs Really Do Their Job?
Taught by
USENIX