Overview
Explore techniques in subset selection and knowledge distillation for building cost-effective, data-efficient AI models in this hour-long conference talk. Delve into the challenges facing large language models (LLMs), including their unsustainable computational requirements and the projected exhaustion of the world's text data. Learn how these methods can improve training efficiency without compromising model performance, and gain insight into how future AI development might move beyond the limits of current LLM training practices.
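The talk abstract does not spell out a specific algorithm, but knowledge distillation is commonly implemented as in Hinton et al.'s formulation: a small student model is trained on a mix of the true labels and the teacher's temperature-softened output distribution. The sketch below is an illustrative NumPy implementation of that standard loss, not the speaker's method; the function names, temperature `T=2.0`, and weight `alpha=0.5` are assumptions chosen for the example.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution,
    # exposing the teacher's "dark knowledge" about near-miss classes.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hinton-style KD loss (illustrative, not the talk's method):
    alpha * CE(student, hard labels) + (1 - alpha) * T^2 * KL(teacher_T || student_T).
    The T^2 factor keeps the soft-target gradient scale comparable across temperatures."""
    p_t = softmax(teacher_logits, T)   # soft teacher targets
    p_s = softmax(student_logits, T)   # soft student predictions
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1).mean()
    hard = softmax(student_logits)     # T = 1 for the hard-label term
    ce = -np.log(hard[np.arange(len(labels)), labels]).mean()
    return alpha * ce + (1 - alpha) * T**2 * kl

# Toy batch: a confident teacher and an untrained (uniform) student.
teacher = np.array([[4.0, 1.0, 0.0],
                    [0.5, 3.0, 0.5]])
student = np.zeros_like(teacher)
labels = np.array([0, 1])
loss = distillation_loss(student, teacher, labels)
```

In practice the same loss is computed per mini-batch inside the student's training loop, with gradients taken only through the student's logits while the teacher stays frozen.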
Syllabus
Ping Ma: Small but Mighty: Subset Selection and Knowledge Distillation #ICBS2024
Taught by
BIMSA