Overview
Syllabus
Intro
What is Machine Reading?
Machine Reading Question Answering Formats
Multiple-choice Question Tasks
Span Selection
Cloze Questions
What is Necessary for Machine Reading?
All Datasets Have Their Biases
A Case Study: bAbI (Weston et al. 2014)
An Examination of CNN/Daily Mail (Chen et al. 2016)
Adversarial Examples in Machine Reading (Jia and Liang 2017)
Adversarial Creation of New Datasets? (Zellers et al. 2018)
Natural Questions (Kwiatkowski et al. 2019)
A Basic Model for Document Attention
A First Try: Attentive Reader
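The core step of an attentive-reader-style model can be sketched as follows: a question vector scores each document token, a softmax turns the scores into an attention distribution, and the weighted sum of token encodings is used to predict the answer. A minimal numpy sketch with toy, randomly initialized encodings (all dimensions and values are illustrative, not from the original paper):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy setup: 5 document tokens, hidden size 4 (hypothetical dimensions)
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 4))   # document token encodings (e.g. from a BiLSTM)
q = rng.normal(size=(4,))     # question encoding

# One attention score per document token, conditioned on the question
scores = H @ q
alpha = softmax(scores)       # attention distribution over document tokens

# Question-conditioned document summary used to predict the answer
context = alpha @ H
print(alpha.shape, context.shape)  # (5,) (4,)
```

In the real model the encodings come from trained recurrent encoders and the score function has learned parameters; the dot product here stands in for that scoring step.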
Attention-over-attention
Bidirectional Attention Flow
Word Classification vs. Span Classification
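The span-classification view can be made concrete with a small sketch: instead of labeling each word independently, the model predicts a start distribution and an end distribution over tokens, and the answer is the span maximizing their product subject to start ≤ end. The parameter vectors below are hypothetical stand-ins for learned pointer weights:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
n, d = 6, 4
H = rng.normal(size=(n, d))      # contextual token encodings
w_start = rng.normal(size=(d,))  # start-pointer weights (illustrative)
w_end = rng.normal(size=(d,))    # end-pointer weights (illustrative)

p_start = softmax(H @ w_start)   # P(token i is the answer start)
p_end = softmax(H @ w_end)       # P(token j is the answer end)

# Best span under the factorized model, constrained to start <= end
best = max(((i, j) for i in range(n) for j in range(i, n)),
           key=lambda ij: p_start[ij[0]] * p_end[ij[1]])
print(best)
```

The O(n²) span search is cheap for passage-length inputs; real systems often also cap the maximum span length.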
Dynamic Span Decoder (Xiong et al. 2017)
Multi-step Reasoning Datasets
Softened and Multi-layer Memory Networks (Sukhbaatar et al. 2015) • Use standard softmax attention, and multiple layers
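The multi-hop idea above can be sketched in a few lines: each layer ("hop") attends softly over the memory with a softmax, and the retrieved summary is added to the query before the next hop, matching the residual update in the end-to-end memory network paper. Memory contents and sizes here are toy placeholders:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(2)
M = rng.normal(size=(8, 4))  # memory slots (encoded supporting sentences)
u = rng.normal(size=(4,))    # initial query (encoded question)

# Multiple hops: soft (softmax) attention over memory, then a query update
for hop in range(3):
    alpha = softmax(M @ u)   # attention distribution over memory slots
    o = alpha @ M            # retrieved memory summary for this hop
    u = u + o                # next-hop query: u^{k+1} = u^k + o^k
print(u.shape)
```

This replaces the hard argmax lookup of the original memory network with a differentiable softmax, so the whole multi-layer stack can be trained end to end.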
When to Stop Reasoning?
Coarse-to-fine Question Answering (Choi et al. 2017)
Retrieval + Language Model
Explicit Question Decomposition for Multi-step Reasoning
Question Answering with Context (Choi et al. 2018, Reddy et al. 2018)
An Aside: Traditional Computational Semantics • Reasoning is something that traditional semantic representations are really good at!
Numerical Calculation
Machine Reading with Symbolic Operations • Can we explicitly incorporate numerical reasoning in machine reading?
Solving Word Problems w/ Symbolic Reasoning • Idea: combine semantic parsing (with explicit functions) and machine reading
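One way to picture the combination of semantic parsing and machine reading is: the reader extracts numbers from the passage, the parser predicts a small symbolic program over them, and a deterministic executor computes the answer. The program format and operation names below are purely illustrative, not from any specific paper:

```python
# Hypothetical executor for predicted symbolic programs over numbers
# extracted from a passage (format is illustrative).
OPS = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
}

def execute(program, numbers):
    """Apply one predicted operation to two extracted numbers."""
    op, i, j = program
    return OPS[op](numbers[i], numbers[j])

# "The home team scored 24 points, the visitors 17. What was the margin?"
numbers = [24.0, 17.0]        # numbers extracted by the reading component
program = ("subtract", 0, 1)  # operation predicted by the semantic parser
print(execute(program, numbers))  # → 7.0
```

Because execution is symbolic, the arithmetic itself is exact; the learning problem reduces to predicting the right program and operands.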
Taught by
Graham Neubig