

Improving Large Language Models for Clinical Named Entity Recognition via Prompt Engineering

Stanford University via YouTube

Overview

Explore a 52-minute talk by Yan Hu on improving large language models for clinical named entity recognition (NER) through prompt engineering, presented in the Stanford MedAI series. The study quantifies the capabilities of GPT-3.5 and GPT-4 on clinical NER tasks and proposes task-specific prompts to enhance their performance. The models are evaluated on two clinical NER tasks: extracting medical problems, treatments, and tests from clinical notes, and identifying nervous system disorder-related adverse events from safety reports. Learn about the clinical task-specific prompt framework developed to improve the GPT models' performance, which combines baseline prompts, annotation guideline-based prompts, error analysis-based instructions, and annotated samples for few-shot learning. Examine results showing significant improvements in model performance under this prompt framework, compare the outcomes to BioClinicalBERT, and gain insights into the potential of GPT models for clinical applications and the effectiveness of incorporating medical knowledge and training samples into prompt engineering.
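The prompt framework described above layers several components onto a baseline instruction. As a rough illustration (not the talk's actual implementation), the assembly of such a prompt might look like the following sketch; every function name, string, and example here is a hypothetical assumption:

```python
# Hypothetical sketch of a clinical task-specific prompt builder in the
# spirit of the framework described in the talk: a baseline instruction
# optionally enriched with annotation guidelines, error-analysis-based
# instructions, and annotated few-shot examples. All names and example
# strings are illustrative, not taken from the study.

def build_clinical_ner_prompt(text, guidelines=None, error_notes=None,
                              examples=None):
    """Assemble a prompt for extracting problems, treatments, and tests."""
    # Baseline prompt: the bare task instruction.
    parts = ["Extract all medical problems, treatments, and tests "
             "from the clinical note below."]
    if guidelines:    # annotation guideline-based prompt component
        parts.append("Annotation guidelines:\n" + guidelines)
    if error_notes:   # error analysis-based instructions component
        parts.append("Common mistakes to avoid:\n" + error_notes)
    for ex_text, ex_entities in examples or []:  # few-shot examples
        parts.append(f"Note: {ex_text}\nEntities: {ex_entities}")
    # Finally, the note to annotate.
    parts.append(f"Note: {text}\nEntities:")
    return "\n\n".join(parts)

prompt = build_clinical_ner_prompt(
    "Patient started on metformin for type 2 diabetes; HbA1c ordered.",
    guidelines="Label medications as treatments; lab orders as tests.",
    error_notes="Do not label negated findings as problems.",
    examples=[("Aspirin given for chest pain.",
               "treatment: Aspirin; problem: chest pain")],
)
print(prompt)
```

The resulting string would then be sent to GPT-3.5 or GPT-4; the talk's reported gains come from adding the guideline, error-analysis, and few-shot components on top of the baseline instruction.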

Syllabus

MedAI #127: Improving LLMs for Clinical Named Entity Recognition via Prompt Engineering | Yan Hu

Taught by

Stanford MedAI

