
Vectoring Into The Future: AWS Empowered RAG Systems for LLMs

Conf42 via YouTube

Overview

Explore the future of AWS-empowered RAG systems for Large Language Models in this conference talk from Conf42 LLMs 2024. Dive into foundation models, generative AI use cases, and AWS's broad range of generative AI offerings. Discover the limitations of LLMs and learn about vector embeddings and vector databases. Gain insights into enabling vector search across AWS services, including Amazon Aurora, OpenSearch, DocumentDB, MemoryDB, and Neptune Analytics. Understand Amazon Bedrock, its knowledge bases, and the vector databases that back them. Finally, watch a live demonstration of the Retrieve and Generate API that puts these techniques into practice.
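
To give a rough idea of what the demonstrated Retrieve and Generate API looks like in practice, here is a minimal boto3 sketch that queries a Bedrock knowledge base. The knowledge base ID, model ARN, region, and query text are placeholders, not values from the talk.

```python
# Minimal sketch: query a Bedrock knowledge base via the Retrieve and Generate API.
# The knowledge base ID and model ARN below are placeholders -- substitute your own.
import boto3

# The Retrieve and Generate API is exposed through the bedrock-agent-runtime client.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "Which AWS services support vector search?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "YOUR_KB_ID",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)

# The generated answer, grounded in documents retrieved from the knowledge base.
print(response["output"]["text"])

# Citations link each generated passage back to the retrieved source chunks.
for citation in response.get("citations", []):
    for ref in citation.get("retrievedReferences", []):
        print(ref["content"]["text"][:120])
```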

Syllabus

Intro
Preamble
Agenda
Why foundation models?
Generative AI can be used for a wide range of use cases
AWS offers a broad choice of generative AI capabilities
Limitations of LLMs
Vector embeddings
Vector databases
Enabling vector search across AWS services
Amazon Aurora with PostgreSQL compatibility
Using pgvector in AWS (see the sketch below)
Amazon OpenSearch Service
Using OpenSearch in AWS
Amazon DocumentDB
Amazon MemoryDB
Amazon Neptune Analytics
Amazon Bedrock
Knowledge bases for Amazon Bedrock
Vector databases for Amazon Bedrock
Retrieve and Generate API
Demo time
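
For the syllabus items on vector embeddings, vector databases, and pgvector on Aurora PostgreSQL, the sketch below shows the basic pattern of storing embeddings and running a nearest-neighbour search. The connection details, table name, and tiny 3-dimensional vectors are illustrative assumptions, not material from the talk.

```python
# Minimal sketch: vector similarity search with pgvector on Aurora PostgreSQL.
import psycopg2

# Placeholder connection details for an Aurora PostgreSQL cluster.
conn = psycopg2.connect(host="your-aurora-endpoint", dbname="postgres",
                        user="postgres", password="your-password")
cur = conn.cursor()

# Enable the pgvector extension and create a table with a vector column.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id bigserial PRIMARY KEY,
        body text,
        embedding vector(3)
    );
""")

# Store a document alongside its embedding (normally produced by an embedding model).
cur.execute("INSERT INTO documents (body, embedding) VALUES (%s, %s);",
            ("hello world", "[0.1, 0.2, 0.3]"))
conn.commit()

# Retrieve the nearest neighbours of a query embedding using the <-> distance operator.
cur.execute("SELECT body FROM documents ORDER BY embedding <-> %s::vector LIMIT 5;",
            ("[0.1, 0.2, 0.25]",))
print(cur.fetchall())
```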

Taught by

Conf42
