
YouTube

Why Larger Language Models Struggle with Subjective Reasoning

USC Information Sciences Institute via YouTube

Overview

Learn about the limitations of large language models (LLMs) in handling subjective reasoning tasks in this research seminar presented by USC PhD fellow Georgios Chochlakis. Explore how In-Context Learning (ICL) and Chain-of-Thought (CoT) prompting, while effective for many natural language tasks, struggle in subjective domains such as emotion and morality recognition. Discover research findings showing that larger models default more strongly to their prior knowledge, even when the in-context evidence conflicts with it, so scaling up can intensify rather than resolve the problem. Examine the complications that arise from aggregating human judgments in subjective tasks, illustrated in the sketch below, and understand why annotator-level modeling may be more valuable than training on aggregated datasets. Gain insights into why careful consideration is needed when deploying LLMs for tasks involving human subjectivity and varying interpretations.
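
As a rough illustration of the aggregation issue the talk raises, the Python sketch below (all names, texts, and labels are hypothetical, not taken from the seminar) shows how majority-vote aggregation of annotator labels erases minority interpretations, and how a few-shot ICL prompt built from such aggregated labels presents only the majority view, which a model with a strong prior can then ignore without any visible conflict.

```python
from collections import Counter

# Hypothetical annotator-level labels for one utterance in an
# emotion recognition task (illustrative data only).
annotations = {
    "text": "I can't believe they cancelled the show.",
    "labels": {"ann_1": "anger", "ann_2": "sadness",
               "ann_3": "anger", "ann_4": "surprise"},
}

# Majority-vote aggregation collapses genuine disagreement into a
# single "gold" label, discarding the minority interpretations.
aggregated = Counter(annotations["labels"].values()).most_common(1)[0][0]
discarded = {a: l for a, l in annotations["labels"].items()
             if l != aggregated}
print("aggregated label:", aggregated)   # -> anger
print("discarded views :", discarded)    # -> sadness, surprise lost

# A minimal ICL (few-shot) prompt assembled from aggregated labels:
# the demonstrations encode only the majority view per example.
demonstrations = [
    ("What a wonderful surprise!", "joy"),
    ("They moved the concert to next year.", "sadness"),
]
prompt = "".join(f"Text: {t}\nEmotion: {e}\n\n" for t, e in demonstrations)
prompt += f"Text: {annotations['text']}\nEmotion:"
print(prompt)
```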

Syllabus

Bigger Isn’t Always Better: Why Larger Language Models Struggle with Subjective Reasoning

Taught by

USC Information Sciences Institute

