Explore the potential risks and ethical concerns surrounding large language models in this thought-provoking conference talk from FAccT 2021. Delve into the concept of "stochastic parrots" as researchers Emily M. Bender, Timnit Gebru, and Angelina McMillan-Major examine whether language models can become too big. Investigate the challenges of curating and documenting vast training datasets, the implications for how research effort is allocated, and the environmental costs of training ever-larger models. Gain insight into the risk mitigation strategies the authors propose, and consider the broader implications of this direction in AI development for society and scientific progress.
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
Association for Computing Machinery (ACM) via YouTube
Overview
Syllabus
Intro
Risks
Unmanageable Data
Research Time
Stochastic Parrots
Risk Mitigation Strategies
Taught by
ACM FAccT Conference