

Language Models Could Learn Semantics - No Matter How You Define It

Santa Fe Institute via YouTube

Overview

Explore a thought-provoking talk by Tal Linzen on the potential of language models to learn semantics, no matter how semantics is defined. Delve into the concept of learning meaning from form, examining an ideal language model and entailment semantics. Investigate Gricean speakers and the assumptions needed to prove the central theorem. Discover experiments in toy settings and on MNLI, while considering limitations such as the no-redundancy assumption being too strong. Evaluate how close practical language models can get to the ideal, and consider whether language models can refer. Gain insights into the intersection of linguistics, semantics, and artificial intelligence in this 26-minute presentation from the Santa Fe Institute.

Syllabus

Intro
What I do
Learning meaning from form
Overview
An ideal language model
Entailment semantics
Gricean speakers
Example
Assumptions we need to prove this theorem
Experiment in toy settings
Experiment: MNLI
Limitation 1: the no-redundancy assumption is too strong
How close can we get to the ideal language model in practice?
Interim discussion
Back to reference
Can language models refer?
Conclusions

Taught by

Santa Fe Institute

