How Geometric Should Our Semantic Models Be?
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore the intersection of vector space models and semantic representation in this thought-provoking lecture by Katrin Erk of the University of Texas at Austin. Delve into the advantages of vector space models for representing word meanings learned from contextual observations, and examine their potential for extending to sentence-level representations. Investigate the technical challenges and limitations of purely geometric approaches, and consider the alternative of combining logical forms with the flexibility of vector spaces. Learn about Erk's research on computational models of word meaning and the automatic acquisition of lexical information from text corpora. Gain insights into the ongoing debate about the optimal balance between geometric and logical approaches in semantic modeling.
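To make the "word meanings from contextual observations" idea concrete, here is a minimal sketch, not taken from the lecture itself: a toy count-based vector space model in which each word is represented by invented co-occurrence counts with a few context words, and meanings are compared with cosine similarity.

```python
# Minimal sketch of a count-based vector space model of word meaning.
# The vocabulary and co-occurrence counts below are invented toy values,
# chosen only to illustrate the geometric comparison of word meanings.
import math

# Each word's vector holds its co-occurrence counts with the context
# words ("drink", "bark", "pet"), in that order.
vectors = {
    "coffee": [9.0, 0.0, 1.0],
    "tea":    [8.0, 0.0, 2.0],
    "dog":    [1.0, 7.0, 6.0],
}

def cosine(u, v):
    """Cosine similarity: dot(u, v) / (|u| * |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Words observed in similar contexts end up close in the space.
print(cosine(vectors["coffee"], vectors["tea"]))   # high similarity
print(cosine(vectors["coffee"], vectors["dog"]))   # low similarity
```

The lecture's central question is how far this purely geometric picture can be pushed, for example to whole sentences, before logical structure has to be brought back in.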
Syllabus
How Geometric Should Our Semantic Models Be? – Katrin Erk (University of Texas)
Taught by
Center for Language & Speech Processing (CLSP), JHU