Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

Jailbreaking ChatGPT-Style Sandboxes - A Guide to Linguistic Hacks and Prompt Engineering

CryptoCat via YouTube

Overview

Explore a 41-minute tutorial demonstrating how to exploit Large Language Model (LLM) vulnerabilities through linguistic hacks and prompt manipulation. Progress through nine levels of doublespeak.chat challenges while learning about prompt leakage, prompt injection, and techniques for circumventing LLM contextual sandboxing. Master the art of discovering hidden secrets by either playing within or manipulating conversation parameters to guide LLM behavior beyond intended boundaries. Access comprehensive resources including GitHub repositories, documentation for various security tools, and detailed write-ups suitable for beginners in cybersecurity and penetration testing. Follow along with clearly marked chapters covering fundamental concepts like jail-breaking LLM sandboxes, reverse prompt engineering techniques, and practical challenge solutions.

Syllabus

Start:
Jail-breaking LLM Sandboxes:
Prompt Leak/Injection:
Reverse Prompt Engineering Techniques:
Forces Unseen: Doublespeak:
Level 1:
Level 2:
Level 3:
Level 4:
Level 5:
Level 6:
Level 7:
Level 8:
Level 9:
End:

Taught by

CryptoCat

Reviews

Start your review of Jailbreaking ChatGPT-Style Sandboxes - A Guide to Linguistic Hacks and Prompt Engineering

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.