From Large Language Models to Large Multimodal Models - Stanford CS25 - Lecture 4

Stanford University via YouTube

Overview

Explore the evolution from large language models to large multimodal models in this Stanford University lecture. Delve into the basics of large language models and examine the academic community's efforts to develop multimodal models over the past year. Learn about CogVLM, a powerful open-source multimodal model with 17B parameters, and CogAgent, a variant designed for GUI and OCR scenarios. Discover applications of multimodal models and potential research directions in academia. Speaker Ming Ding, a research scientist at Zhipu AI, shares insights on multimodal generative models, multimodal understanding models, and language models. Gain insight into how visual perception is integrated with language model capabilities in this 1 hour and 20 minute presentation from the Stanford CS25 Transformers United series.

Syllabus

Stanford CS25: V4 I From Large Language Models to Large Multimodal Models

Taught by

Stanford Online
