Overview
Learn about InCA (In-context Continual Learning Assisted by an External Continual Learner), a 40-minute research presentation introducing a novel approach to continual learning in large language models that requires neither traditional fine-tuning nor parameter-efficient fine-tuning (PEFT) adapters. Explore how the method pairs in-context learning with an external continual learner (ECL) that maintains statistical models over semantic tags of each input, enabling efficient class selection and preventing catastrophic forgetting without storing any previous data. Discover why careful prompt design and candidate-class pruning through the ECL can prove more effective than conventional gradient-based optimization and parameter updates. Developed by researchers from the University of Illinois Chicago, Intel Labs, and Salesforce AI Research, the presentation also positions InCA as an alternative to Retrieval-Augmented Generation (RAG), offering insights into the future of dynamic, continually adapting AI models.
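To make the class-selection idea concrete, here is a minimal, hypothetical Python sketch, not the authors' implementation: an external continual learner that keeps incrementally updated per-class statistics over embeddings of semantic tags (a simplified diagonal-variance stand-in for full Gaussian/Mahalanobis modeling) and shortlists the nearest classes so that only those class names enter the in-context prompt. The ExternalContinualLearner class, the toy embeddings, and all parameter names below are illustrative assumptions.

    import numpy as np

    class ExternalContinualLearner:
        """Sketch of an ECL: per-class running means over tag embeddings,
        updated incrementally so no previous training data is stored."""

        def __init__(self, dim, eps=1e-3):
            self.dim = dim
            self.eps = eps            # keeps the distance well-defined early on
            self.means = {}           # class label -> running mean embedding
            self.counts = {}          # class label -> number of updates seen
            self.var = np.ones(dim)   # shared diagonal variance estimate
            self.total = 0

        def update(self, label, tag_embedding):
            """Fold one tag embedding into the statistics for its class."""
            x = np.asarray(tag_embedding, dtype=float)
            n = self.counts.get(label, 0)
            mu = self.means.get(label, np.zeros(self.dim))
            new_mu = mu + (x - mu) / (n + 1)     # incremental mean update
            self.means[label] = new_mu
            self.counts[label] = n + 1
            self.total += 1
            # Incremental update of the shared diagonal variance.
            self.var += ((x - new_mu) ** 2 - self.var) / self.total

        def select_classes(self, tag_embedding, k=3):
            """Return the k classes nearest to the query's tag embedding
            (diagonal Mahalanobis-style distance)."""
            x = np.asarray(tag_embedding, dtype=float)
            inv_var = 1.0 / (self.var + self.eps)
            scores = {
                label: float(((x - mu) ** 2 * inv_var).sum())
                for label, mu in self.means.items()
            }
            return sorted(scores, key=scores.get)[:k]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        ecl = ExternalContinualLearner(dim=8)
        # Toy stream: tag embeddings arriving class by class, never replayed.
        for label, center in [("sports", 0.0), ("finance", 3.0), ("travel", -3.0)]:
            for _ in range(20):
                ecl.update(label, rng.normal(center, 1.0, size=8))
        query = rng.normal(3.0, 1.0, size=8)      # a query resembling "finance"
        candidates = ecl.select_classes(query, k=2)
        # Only the shortlisted classes go into the in-context prompt,
        # keeping it short no matter how many classes have been learned.
        print("Classify the input into one of:", ", ".join(candidates))

The design point this sketch illustrates is why no replay buffer is needed: only a few summary statistics per class are retained and updated online, and the LLM itself is never modified, so earlier classes cannot be overwritten by later training.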
Syllabus
Beyond RAG: New Continual Learning of LLM w/ InCA
Taught by
Discover AI