Overview
Explore a Google TechTalk presented by John Thickstun on robust distortion-free watermarks for language models. Delve into a protocol for planting watermarks in text generated by autoregressive language models, so that the watermarks remain robust to edits while leaving the distribution of generated text unchanged. Learn how the watermarking process controls the source of randomness with a secret key during the language model's decoding phase. Discover the statistical correlations used to detect the watermark and why it is provably undetectable to anyone without the key. Examine two alternative decoders: inverse transform sampling and Gumbel argmax sampling. Gain insights from experimental validation on the OPT-1.3B, LLaMA 7B, and Alpaca 7B language models, demonstrating statistical power and robustness against paraphrasing attacks. Understand the speaker's background as a postdoctoral researcher at Stanford University, his previous work, and his recognition in the field of generative models and controllability.
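The overview describes the core mechanism: a secret key seeds the randomness consumed during decoding, and detection measures how strongly the generated tokens correlate with that key-derived sequence. The sketch below illustrates the Gumbel argmax (exponential minimum) decoding idea in Python under simplifying assumptions; the key layout, the toy language model, and the averaged detection statistic are illustrative choices, not the talk's exact construction (in particular, the real detector aligns the key sequence to the text so that the watermark survives edits).

```python
import numpy as np

def keyed_randomness(key: int, seq_len: int, vocab_size: int) -> np.ndarray:
    """Derive the watermark's source of randomness: a sequence of uniform
    random vectors seeded by the secret key (hypothetical layout)."""
    rng = np.random.default_rng(key)
    return rng.random((seq_len, vocab_size))

def gumbel_argmax_decode(probs: np.ndarray, u: np.ndarray) -> int:
    """Gumbel argmax / exponential-minimum decoder: given the model's
    next-token probabilities `probs` and a keyed uniform vector `u`,
    return argmax_i u_i ** (1 / p_i). Marginally over u, this is an exact
    sample from `probs`, so the output distribution is not distorted."""
    scores = np.where(probs > 0,
                      u ** (1.0 / np.maximum(probs, 1e-12)),
                      -np.inf)
    return int(np.argmax(scores))

def detect_score(tokens: list[int], u_seq: np.ndarray) -> float:
    """Toy detection statistic: watermarked text tends to choose tokens whose
    keyed uniform value u[t, token] is unusually large, so we average it.
    (A real detector aligns the key sequence to the text to handle edits.)"""
    vals = [u_seq[t, tok] for t, tok in enumerate(tokens)]
    return float(np.mean(vals))

# Demo with a toy "language model" that emits random next-token distributions.
if __name__ == "__main__":
    key, vocab, length = 12345, 50, 20
    u_seq = keyed_randomness(key, length, vocab)
    rng = np.random.default_rng(0)
    tokens = []
    for t in range(length):
        logits = rng.normal(size=vocab)
        probs = np.exp(logits) / np.exp(logits).sum()
        tokens.append(gumbel_argmax_decode(probs, u_seq[t]))
    print("detection score (correct key):  ", detect_score(tokens, u_seq))
    print("detection score (unrelated key):", detect_score(tokens, keyed_randomness(999, length, vocab)))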
Syllabus
Robust Distortion-free Watermarks for Language Models
Taught by
Google TechTalks