Building Machines that Discover Generalizable, Interpretable Knowledge
Paul G. Allen School via YouTube
Overview
Explore a lecture on program induction and its potential to advance artificial intelligence. Delve into Kevin Ellis's presentation on "Building Machines that Discover Generalizable, Interpretable Knowledge," which examines how program induction systems can represent knowledge as programs and learn by synthesizing code. Discover case studies in vision, natural language, and learning-to-learn that demonstrate machines capable of acquiring new knowledge from modest experience, strongly generalizing that knowledge, representing it interpretably, and applying it to diverse problems. Learn about a novel neuro-symbolic algorithm for Bayesian program synthesis that combines program synthesis techniques with the symbolic, probabilistic, and neural traditions of AI. Gain insights from Ellis, a final-year MIT graduate student at the time of the talk, on the future of AI and its potential to achieve human-like learning and problem-solving across diverse domains.
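To make the core idea concrete, here is a minimal, hypothetical sketch of program induction: searching a tiny domain-specific language for the shortest program consistent with a few input-output examples, then applying it to unseen inputs. The DSL, the shortest-first "prior," and the enumerative search below are illustrative assumptions only; they are not the neuro-symbolic Bayesian synthesis algorithm presented in the talk.

```python
# Toy program induction: enumerate programs in a tiny DSL, shortest first
# (a crude description-length prior), and keep the first one that fits
# all examples. Purely illustrative; not the talk's algorithm.

import itertools

# A tiny DSL of integer -> integer primitives.
PRIMITIVES = {
    "inc":    lambda x: x + 1,
    "dec":    lambda x: x - 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    """Apply a sequence of primitive names left-to-right to input x."""
    for name in program:
        x = PRIMITIVES[name](x)
    return x

def induce(examples, max_depth=4):
    """Return the shortest program consistent with all (input, output) pairs."""
    for depth in range(1, max_depth + 1):
        for program in itertools.product(PRIMITIVES, repeat=depth):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

if __name__ == "__main__":
    # Learn a rule from a handful of examples: here, f(x) = (x + 1) * 2.
    examples = [(1, 4), (2, 6), (5, 12)]
    program = induce(examples)
    print("Induced program:", program)             # ('inc', 'double')
    print("Generalizes to 10:", run(program, 10))  # 22, an unseen input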
Syllabus
Allen School Colloquium: Kevin Ellis (MIT)
Taught by
Paul G. Allen School