

Stanford Seminar - Training Classifiers with Natural Language Explanations

Stanford University via YouTube

Overview


Training accurate classifiers requires many labels, but each label provides only limited information (one bit for binary classification). In this work, we propose BabbleLabble, a framework for training classifiers in which an annotator provides a natural language explanation for each labeling decision. A semantic parser converts these explanations into programmatic labeling functions that generate noisy labels for an arbitrary amount of unlabeled data, which is used to train a classifier. On three relation extraction tasks, we find that users are able to train classifiers with comparable F1 scores 5-100 times faster by providing explanations instead of just labels. Furthermore, given the inherent imperfection of labeling functions, we find that a simple rule-based semantic parser suffices.

The full paper can be found here: https://arxiv.org/abs/1805.03818.
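To make the overview concrete, the sketch below shows the kind of programmatic labeling function it describes: a rule derived from an explanation such as "True, because the word 'wife' appears between person X and person Y," plus a naive majority-vote aggregation over several such rules. This is a minimal, hypothetical illustration for a spouse relation extraction task, not code from the talk or the paper; the Candidate class, the between helper, and the label constants are assumptions made for the example, and the actual label aggregator discussed in the talk is more sophisticated than a majority vote.

```python
# Hypothetical sketch of explanation-derived labeling functions and a naive
# majority-vote aggregator; not code from the paper or any library.
ABSTAIN, FALSE, TRUE = 0, -1, 1

class Candidate:
    """A pair of person mentions in one sentence (token offsets assumed)."""
    def __init__(self, tokens, x_span, y_span):
        self.tokens = tokens          # lowercased tokens of the sentence
        self.x_span = x_span          # (start, end) indices of person X
        self.y_span = y_span          # (start, end) indices of person Y

def between(c):
    """Tokens appearing between the two person mentions."""
    lo, hi = min(c.x_span[1], c.y_span[1]), max(c.x_span[0], c.y_span[0])
    return c.tokens[lo:hi]

# Explanation: "True, because the word 'wife' appears between person X and person Y."
def lf_wife_between(c):
    return TRUE if "wife" in between(c) else ABSTAIN

# Explanation: "False, because the two people are more than ten words apart."
def lf_far_apart(c):
    return FALSE if len(between(c)) > 10 else ABSTAIN

def majority_vote(c, lfs):
    """Aggregate labeling-function votes into one noisy training label."""
    votes = [lf(c) for lf in lfs if lf(c) != ABSTAIN]
    return TRUE if sum(votes) > 0 else FALSE if sum(votes) < 0 else ABSTAIN

tokens = "barack obama and his wife michelle obama visited chicago".split()
cand = Candidate(tokens, x_span=(0, 2), y_span=(5, 7))
print(majority_vote(cand, [lf_wife_between, lf_far_apart]))  # -> 1 (TRUE)
```

Because each labeling function can be applied to unlabeled sentences automatically, a handful of explanations can produce noisy labels for a large corpus, which is what allows the downstream classifier to be trained far more cheaply than with one-label-per-example annotation.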

Syllabus

Introduction.
Machine learning can help you!.
Traditional Labeling.
Higher Bandwidth Supervision.
Explanations Encode Labeling Heuristics.
BabbleLabble Framework.
Explanations Encode Heuristics.
Predicates.
Semantic Parser I/O.
Semantic Filter.
Pragmatic Filters.
Label Aggregator.
Discriminative Classifier.
Generalization.
Datasets.
Utilizing Unlabeled Data.
Filter Bank Effectiveness.
Perfect Parsers Need Not Apply.
Limitations.
Related Work: Explanations as Features.
Related Work: Highlighting.
Summary.

Taught by

Stanford Online
