

Me-LLaMA - Medical Foundation Large Language Models for Comprehensive Text Analysis and Beyond

Stanford University via YouTube

Overview

Watch a comprehensive research presentation exploring Me-LLaMA, a suite of medical foundation large language models developed for enhanced medical text analysis. Learn how these models were trained on 129B pre-training tokens and 214K instruction tuning samples drawn from diverse biomedical and clinical sources, with the 70B models requiring over 100,000 A100 GPU hours. Discover how Me-LLaMA outperforms existing open-source medical LLMs in both zero-shot and supervised learning settings across six medical text analysis tasks and 12 benchmark datasets, and how task-specific instruction tuning enables it to surpass commercial LLMs such as ChatGPT and GPT-4 on multiple datasets. The talk is presented by Dr. Qianqian Xie, an Associate Research Scientist in Yale University's Department of Biomedical Informatics and Data Science, whose research focuses on natural language processing applications in medicine and has appeared in leading conferences and journals.
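For viewers who want to experiment with the zero-shot setting described above, the sketch below shows how a LLaMA-family checkpoint can be prompted for a medical text analysis task using the Hugging Face transformers library. The checkpoint path, the clinical note, and the generation settings are illustrative placeholders, not details from the talk; substitute a locally available Me-LLaMA checkpoint.

```python
# Minimal sketch: zero-shot medical text analysis with a LLaMA-family model.
# MODEL_PATH is a hypothetical placeholder; point it at a real local checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/me-llama-13b"  # placeholder, not an official model ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

# Zero-shot prompt for one example task (summarization of a clinical note).
prompt = (
    "Summarize the following clinical note in one sentence.\n\n"
    "Note: Patient is a 62-year-old male presenting with chest pain "
    "radiating to the left arm, onset two hours ago.\n\n"
    "Summary:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
))
```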

Syllabus

MedAI #130: Me-LLaMA: Medical Foundation LLMs for Text Analysis and Beyond | Qianqian Xie

Taught by

Stanford MedAI

