

The Future of Natural Language Processing

HuggingFace via YouTube

Overview

Explore the future of Natural Language Processing in this comprehensive 1-hour lecture by Thomas Wolf, Science Lead at Hugging Face. Delve into transfer learning, examining open questions, current trends, limits, and future directions. Gain insights from a curated selection of late-2019 and early-2020 research papers covering model size and computational efficiency, out-of-domain generalization, model evaluation, fine-tuning, sample efficiency, common sense, and inductive biases. Analyze the impact of increasing data and model sizes, compare in-domain and out-of-domain generalization, and investigate solutions to robustness issues in NLP. Discuss the rise of Natural Language Generation (NLG) and its implications for the field, and address critical questions surrounding inductive bias and common sense in AI language models. Access the accompanying slides for visual support, and follow Hugging Face and Thomas Wolf on Twitter for ongoing updates in the rapidly evolving world of NLP.

Syllabus

Intro
Open questions, current trends, limits
Model size and computational efficiency
Using more and more data
Pretraining on more data
Fine-tuning on more data
More data or better models
In-domain vs. out-of-domain generalization
The limits of NLU and the rise of NLG
Solutions to the lack of robustness
Reporting and evaluation issues
The inductive bias question
The common sense question

Taught by

Hugging Face

