When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications - Lecture 1

Association for Computing Machinery (ACM) via YouTube

Overview

Explore a conference talk, presented at SIGIR 2024, on the intersection of Mixture of Experts (MOE) and Large Language Models (LLMs) for multi-task medical applications. Delve into the parameter-efficient fine-tuning techniques presented by authors Qidong Liu, Xian Wu, Xiangyu Zhao, Yuanshao Zhu, Derong Xu, Feng Tian, and Yefeng Zheng. Gain insight into how these methods improve efficiency and performance across a range of medical tasks, and consider their potential impact on healthcare technology and AI-assisted medical decision-making.
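
To make the topic concrete: one common recipe for parameter-efficient multi-task tuning is to attach a small set of LoRA-style low-rank adapters to a frozen base layer and let a task-conditioned gate mix them, in the spirit of the MOE-plus-LLM combination the talk covers. The PyTorch sketch below is a minimal illustration under that assumption; the class name, gating scheme, and dimensions are hypothetical and not the authors' implementation.

```python
import torch
import torch.nn as nn

class MoELoRALinear(nn.Module):
    """Illustrative sketch: a frozen linear layer plus a mixture of
    low-rank (LoRA-style) experts, mixed by a task-conditioned gate.
    Hypothetical design, not the paper's actual implementation."""

    def __init__(self, base: nn.Linear, num_experts: int = 4,
                 rank: int = 8, num_tasks: int = 3, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the adapters and gate are trained

        in_f, out_f = base.in_features, base.out_features
        self.scaling = alpha / rank
        # One (A, B) low-rank pair per expert; B starts at zero so the
        # initial update is a no-op, as in standard LoRA practice.
        self.lora_A = nn.Parameter(torch.randn(num_experts, in_f, rank) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(num_experts, rank, out_f))
        # Gate: task id -> softmax weights over experts.
        self.gate = nn.Embedding(num_tasks, num_experts)

    def forward(self, x: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, in_f); task_id: (batch,)
        weights = torch.softmax(self.gate(task_id), dim=-1)  # (batch, E)
        # Per-expert low-rank update, shape (batch, seq, E, out_f).
        update = torch.einsum("bsi,eir,ero->bseo", x, self.lora_A, self.lora_B)
        mixed = torch.einsum("bseo,be->bso", update, weights)
        return self.base(x) + self.scaling * mixed


# Usage: adapt one projection for three hypothetical medical tasks.
layer = MoELoRALinear(nn.Linear(64, 64), num_experts=4, rank=8, num_tasks=3)
x = torch.randn(2, 10, 64)      # (batch, seq, hidden)
task_id = torch.tensor([0, 2])  # one task label per example
out = layer(x, task_id)
print(out.shape)                # torch.Size([2, 10, 64])
```

Because only the adapter matrices and the gate are trainable, the number of tuned parameters stays small as tasks are added, and switching the task id changes the expert mixture without touching the frozen backbone.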

Syllabus

SIGIR 2024 T1.2 [fp] When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications

Taught by

Association for Computing Machinery (ACM)
