
YouTube

Make LLM Apps Sane Again - Forgetting Incorrect Data in Real Time

Conf42 via YouTube

Overview

Explore a conference talk on improving Large Language Model (LLM) applications by implementing real-time data correction. Delve into LLM limitations and various correction methods, including fine-tuning, prompt engineering, and Retrieval-Augmented Generation (RAG). Learn about vector embeddings, popular RAG use cases, and the potential risks of compromised RAG data. Discover a solution using real-time vector indexing, and follow along with a practical demonstration of building a chatbot. Gain insights on the importance of reactivity in LLM applications and walk away with key takeaways for enhancing LLM performance and reliability.
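To make the core idea concrete: a real-time vector index lets corrected or retracted documents disappear from retrieval immediately, so the chatbot stops citing them. The sketch below is a toy illustration of that behaviour in plain Python, not the Pathway pipeline shown in the demo; the `embed` function, the `LiveVectorIndex` class, and the document IDs are placeholder names, and a real application would use an actual embedding model and streaming updates.

```python
# Minimal sketch of a real-time vector index for RAG: documents can be added,
# corrected, or removed ("forgotten") at any time, and retrieval always
# reflects the current state of the data. Illustrative toy only.

import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic 'embedding' (hash-seeded random vector), a stand-in
    for a real embedding model such as a sentence transformer."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)


class LiveVectorIndex:
    """In-memory store supporting upsert, delete, and cosine-similarity search."""

    def __init__(self):
        self._vectors: dict[str, np.ndarray] = {}
        self._texts: dict[str, str] = {}

    def upsert(self, doc_id: str, text: str) -> None:
        # Adding or correcting a document takes effect on the next query.
        self._vectors[doc_id] = embed(text)
        self._texts[doc_id] = text

    def forget(self, doc_id: str) -> None:
        # Removing incorrect data: later queries can no longer retrieve it.
        self._vectors.pop(doc_id, None)
        self._texts.pop(doc_id, None)

    def search(self, query: str, k: int = 3) -> list[tuple[str, float]]:
        q = embed(query)
        scores = {doc_id: float(v @ q) for doc_id, v in self._vectors.items()}
        top = sorted(scores, key=scores.get, reverse=True)[:k]
        return [(self._texts[d], scores[d]) for d in top]


if __name__ == "__main__":
    index = LiveVectorIndex()
    index.upsert("policy-1", "Refunds are processed within 30 days.")
    index.upsert("policy-2", "Refunds are processed within 3 days.")  # incorrect entry
    print(index.search("How long do refunds take?"))

    index.forget("policy-2")  # incorrect document is forgotten in real time
    print(index.search("How long do refunds take?"))
```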

Syllabus

Intro
Preamble
Agenda
LLMs
LLM limitations
How to correct the model?
Fine-tuning
Prompt engineering
Problems with manual prompting
RAG
What are vector embeddings?
Popular RAG use cases
What happens if the RAG data is compromised?
Solution: use a real-time vector index
Practice: build a chatbot
Pathway demo
Reactivity is key
Takeaways
Thank you!

Taught by

Conf42
