Overview
Explore research from Harvard University in this 27-minute video presentation on enhancing In-Context Learning (ICL) for Large Language Models and improving retrieval-augmented generation (RAG) systems. The talk examines how transformers learn from context, with the goal of shaping model behavior at inference time without expensive fine-tuning or pre-training. The work is a collaboration between Harvard's CBS-NTT Program in Physics of Intelligence, the Department of Physics, the School of Engineering and Applied Sciences (SEAS), the Physics & Informatics Lab at NTT Research Inc., and the University of Michigan's Computer Science and Engineering department. Researchers Francisco Park, Andrew Lee, Ekdeep Singh Lubana, Yongyi Yang, Maya Okawa, Kento Nishi, Martin Wattenberg, and Hidenori Tanaka present developments in representation learning, transformer optimization, and advanced AI reasoning techniques.
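To make the core idea concrete: in-context learning steers a model with demonstrations placed directly in the prompt rather than through gradient updates, and RAG extends this by prepending retrieved passages. The sketch below is a minimal, hypothetical illustration (not code from the presentation); the function name `build_icl_prompt` and the example texts are assumptions for demonstration only.

```python
# Minimal sketch of in-context learning (ICL): the model is steered by
# demonstrations placed in the prompt, with no fine-tuning or pre-training.
# All example texts, labels, and the function name are hypothetical.

def build_icl_prompt(examples, query, retrieved_context=None):
    """Assemble a few-shot prompt; optionally prepend retrieved passages (RAG-style)."""
    parts = []
    if retrieved_context:
        # Retrieved documents act as extra context the model can ground its answer in.
        parts.append("Context:\n" + "\n".join(retrieved_context))
    for text, label in examples:
        # Each demonstration pair shows the model the desired input -> output mapping.
        parts.append(f"Input: {text}\nLabel: {label}")
    # The unanswered query goes last; the model completes the final label.
    parts.append(f"Input: {query}\nLabel:")
    return "\n\n".join(parts)


demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I regret buying this product.", "negative"),
]
prompt = build_icl_prompt(demonstrations, "The service was slow but the food was great.")
print(prompt)  # Feed this prompt to any LLM; the demonstrations serve as in-context "training data".
```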
Syllabus
NEW: Better In-Context Learning (ICL), Improved RAG (Harvard)
Taught by
Discover AI