Overview
Explore the transformative role of Large Language Models (LLMs) in multimodality in this 38-minute conference talk by Alex Coqueiro from AWS. Gain insight into the contextual foundations and significance of multimodality, covering the main data modalities and common multimodal tasks. Discover cutting-edge multimodal systems, with a focus on Latent Diffusion Model (LDM) technologies built with PyTorch, LangChain, Stable Diffusion, and LLaVA. Examine practical examples that integrate multimodality techniques with LLaMa 2, Falcon, and SDXL, and see how these models are shaping the multimodal landscape.
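The talk's own demos are not reproduced here, but a minimal sketch along the same lines, assuming the Hugging Face diffusers library and a CUDA-capable GPU, illustrates the kind of LDM workflow it covers: generating an image from a text prompt with SDXL in PyTorch. The model ID and prompt below are example choices, not taken from the session.

```python
# Illustrative sketch only (not code from the talk): text-to-image generation
# with Stable Diffusion XL, a latent diffusion model, using PyTorch and the
# Hugging Face diffusers library.
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL base checkpoint in half precision.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU is available

# The text prompt conditions the denoising process in latent space;
# the decoder then maps the final latent back to pixel space.
image = pipe(prompt="an astronaut riding a horse, photorealistic").images[0]
image.save("astronaut.png")
```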
Syllabus
Unraveling Multimodality with Large Language Models - Alex Coqueiro, AWS
Taught by
Linux Foundation