
LLM Performance Analysis: Context Length Failures at 2K Tokens

Discover AI via YouTube

Overview

Learn about the surprising limitations of Large Language Models (LLMs) in handling context length in this 15-minute video, which reveals that, as of January 2024, two-thirds of current LLMs struggle with 2,000-token inputs. Explore detailed performance testing using a 741-word prompt (1,254 tokens) and discover how open-source LLMs unexpectedly outperform major commercial models. Examine the technical implications for Retrieval-Augmented Generation (RAG) systems and gain insight into the "Lost in the Middle" phenomenon, complete with benchmark data and performance comparisons across different LLM implementations.
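To give a sense of the kind of testing the video describes, here is a minimal sketch of how one might measure a prompt's token count and probe the "Lost in the Middle" effect by moving a key fact through a roughly 2,000-token prompt. This is not the video's actual test harness; the tiktoken cl100k_base tokenizer, the filler text, the needle fact, and the ask_llm() call are all illustrative assumptions.

```python
# Sketch only: estimate token counts and build "Lost in the Middle" probes.
# Assumes an OpenAI-style tokenizer (tiktoken, cl100k_base); other models tokenize differently.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def token_count(text: str) -> int:
    """Count tokens under the chosen tokenizer."""
    return len(enc.encode(text))

# A 741-word prompt can land well above 1,200 tokens, as in the video's example.
filler_sentence = "This paragraph contains routine background detail. "
needle = "The access code for the archive is 4471"  # hypothetical fact to retrieve
question = "\n\nQuestion: What is the access code for the archive?"

def build_prompt(position: float, target_tokens: int = 2000) -> str:
    """Place the needle at a relative position (0.0 = start, 1.0 = end)
    inside filler text padded to roughly target_tokens tokens."""
    body = ""
    while token_count(body) < target_tokens - token_count(needle + question):
        body += filler_sentence
    sentences = body.split(". ")
    idx = int(position * len(sentences))
    sentences.insert(idx, needle)
    return ". ".join(sentences) + question

for pos in (0.0, 0.5, 1.0):
    prompt = build_prompt(pos)
    print(f"needle at {pos:.0%} of prompt -> {token_count(prompt)} tokens")
    # answer = ask_llm(prompt)        # hypothetical call to the model under test
    # correct = "4471" in answer      # accuracy by needle position reveals middle-of-context loss
```

Sweeping the needle position and comparing accuracy at the start, middle, and end of the context is one common way the "Lost in the Middle" pattern is demonstrated; the video's own benchmarks and model comparisons are what it reports, not this sketch.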

Syllabus

LLMs FAIL at 2K context length - Yours Too?

Taught by

Discover AI

