Using LLMs to Detect Malicious macOS Activity - Detecting Malware Through Context-Enhanced Analysis
Objective-See Foundation via YouTube

Overview

Watch a 27-minute conference talk from the Objective-See Foundation exploring how large language models (LLMs) can be leveraged to detect malicious macOS activity through command-line analysis. Learn how to enhance detection capabilities by combining endpoint telemetry from macOS systems with contextual information about endpoints and users. Discover techniques for building abstract detections that can identify typically elusive behaviors such as obfuscated commands and masquerading. Presented by Kimo Bumanglag, a Member of Technical Staff at OpenAI and a cyber warfare officer, alongside Joseph Millman, who works in Detections and Response at OpenAI, the talk demonstrates practical applications of prompt engineering and context enrichment for improved security detection using LLMs.
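
To make the context-enrichment idea in the description concrete, here is a minimal sketch, not the presenters' code: it pairs a hypothetical macOS process event with endpoint and user context and asks an LLM for a verdict. The use of the OpenAI Python SDK, the model name, the prompt wording, and the sample telemetry fields are all illustrative assumptions rather than details taken from the talk.

```python
# Minimal sketch of context-enhanced LLM analysis of a macOS command line.
# Assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name, prompt wording, and telemetry fields are illustrative only.
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical endpoint telemetry: a process event plus context about the host and user.
event = {
    "command_line": "echo <base64-blob> | base64 -d | sh",
    "parent_process": "/usr/bin/osascript",
    "endpoint": {"hostname": "design-mbp-042", "role": "designer laptop"},
    "user": {"username": "jdoe", "department": "Design", "is_engineer": False},
}

SYSTEM_PROMPT = (
    "You are a macOS detection engineer. Given a process event and context about the "
    "endpoint and user, decide whether the activity looks malicious, suspicious, or "
    "benign. Watch for obfuscation (e.g. base64-decoded shells) and masquerading. "
    'Respond with JSON: {"verdict": ..., "reason": ...}.'
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": json.dumps(event, indent=2)},
    ],
)

print(response.choices[0].message.content)
```

The point of the enrichment step is that the same command line can be benign on an engineer's workstation and highly suspicious on a designer's laptop; feeding that context alongside the raw telemetry is what lets the model produce the kind of abstract, behavior-level verdicts the talk describes.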
Syllabus
OBTS v7: A Little Less Malware, A Little More Context: Using LLMs to Detect Malicious macOS Activity
Taught by
Objective-See Foundation