Jailbreaking ChatGPT-Style Sandboxes - A Guide to Linguistic Hacks and Prompt Engineering

CryptoCat via YouTube

Classroom Contents

  1. Start
  2. Jail-breaking LLM Sandboxes
  3. Prompt Leak/Injection
  4. Reverse Prompt Engineering Techniques
  5. Forces Unseen: Doublespeak
  6. Level 1
  7. Level 2
  8. Level 3
  9. Level 4
  10. Level 5
  11. Level 6
  12. Level 7
  13. Level 8
  14. Level 9
  15. End
