
Edit Distance Robust Watermarks - Beyond Substitution Channels in Language Models

Simons Institute via YouTube

Overview

Watch a 47-minute lecture from the Simons Institute where Noah Golowich from MIT presents groundbreaking research on watermarking language model outputs with provable guarantees. Explore innovative approaches to detecting AI-generated text through a watermarking scheme that achieves both undetectability and robustness to edits. Learn about the cryptographic concept of undetectability introduced by Christ, Gunn & Zamir, and discover how this new scheme handles adversarial insertions, substitutions, and deletions in watermarked text. Understand the technical implementation of indexing pseudorandom codes and their role in creating robust watermarks when working with large alphabets. Gain insights into the generic transformation process from codes to watermarking schemes for language models, all while using weaker computational assumptions than previous approaches. The presentation, part of the "Alignment, Trust, Watermarking, and Copyright Issues in LLMs" series, represents joint work with Ankur Moitra and offers a significant advancement in the field of AI text detection and watermarking.
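The talk's construction is based on indexing pseudorandom codes, which is beyond the scope of this summary. As a much simpler, purely illustrative sketch of the general keyed-hash watermark-detection idea (not the scheme presented in the lecture), a detector might pseudorandomly partition the vocabulary with a secret key and test whether "green" tokens are over-represented; all names and thresholds below are hypothetical:

```python
import hashlib

def is_green(token: str, key: str) -> bool:
    # Pseudorandomly assign each token to the "green" half of the
    # vocabulary using a keyed hash (a stand-in for a pseudorandom code).
    digest = hashlib.sha256((key + token).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str], key: str) -> float:
    # Unwatermarked text should land near 1/2 green by chance; a
    # watermarked sampler biased toward green tokens pushes this higher,
    # so a detector thresholds this fraction.
    hits = sum(is_green(t, key) for t in tokens)
    return hits / max(len(tokens), 1)
```

Because membership is decided token by token, a bounded number of insertions, deletions, or substitutions shifts the fraction only slightly, which hints at why edit-robustness is the right target; the lecture's scheme achieves this with provable guarantees rather than this heuristic.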

Syllabus

Edit Distance Robust Watermarks: beyond substitution channels

Taught by

Simons Institute

