Evaluating LLMs for AI Risk - Techniques for Red Teaming Generative AI

Evaluating LLMs for AI Risk - Techniques for Red Teaming Generative AI

MLOps.community via YouTube Direct link

Summary

12 of 12

12 of 12

Summary

Class Central Classrooms beta

YouTube videos curated by Class Central.

Classroom Contents

Evaluating LLMs for AI Risk - Techniques for Red Teaming Generative AI

Automatically move to the next video in the Classroom when playback concludes

  1. 1 Introduction
  2. 2 Why Red teaming
  3. 3 What is Red teaming
  4. 4 What to test
  5. 5 Failure modes
  6. 6 Automation
  7. 7 Red teaming
  8. 8 Prompt injection attack
  9. 9 Prompt extraction
  10. 10 Data transformation
  11. 11 Model alignment test
  12. 12 Summary

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.