Class Central Classrooms (beta): YouTube videos curated by Class Central.
Classroom Contents
Pinecone Vercel Starter Template and RAG - Live Code Review Part 2
- 1 Continuing discussion around the recursive crawler (see the first sketch after this list)
- 2 GitHub Copilot and the tasks it excels at
- 3 What do we do with the HTML we extract? How the seeder works (see the second sketch after this list)
- 4 The different types of document splitters you can use
- 5 embedDocument and how it works
- 6 Why do we split documents when working with a vector database?
- 7 Problems that occur if you don’t split documents
- 8 Proper chunking improves relevance
- 9 You still need to tweak and experiment with your chunk parameters
- 10 Chunked upserts
- 11 Chat endpoint - how we use the context at runtime (see the third sketch after this list)
- 12 Injecting context into LLM prompts
- 13 Is there a measurable difference in where you put the context in the prompt?
- 14 Reviewing the end-to-end RAG workflow
- 15 LLMs have conditioned us to be okay with responses being pretty slow!
- 16 Cool UX anecdote around what humans consider too long
- 17 You have an opportunity to associate chunks with metadata
- 18 UI cards - selecting one to show it was used as context in response
- 19 How we make it visually clear which chunks and context were used in the LLM's response
- 20 Auditability and why it matters
- 21 Testing the live app
- 22 Outro chatting - Thursday AI sessions on Twitter Spaces
- 23 Reviewing the GitHub project - this is all open source!
- 24 Inaugural stream conclusion
- 25 Vim / VS Code / Cursor AI IDE discussion
- 26 Setting up dev tools on macOS
- 27 Upcoming stream ideas - Image search / Pokémon search
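
The sketches below are minimal, hedged illustrations of the flows covered on the stream, not the starter template's actual code. First, the recursive crawler from chapter 1: this sketch assumes Node 18+ (for the global `fetch`) and cheerio for link extraction; the `crawl` function name and the depth handling are illustrative.

```typescript
// Minimal sketch of a recursive crawler: fetch a page, collect same-origin
// links, and recurse up to a depth limit. Illustrative only; the template's
// crawler handles more edge cases.
import * as cheerio from "cheerio";

export async function crawl(
  url: string,
  depth = 2,
  seen: Set<string> = new Set()
): Promise<Map<string, string>> {
  const pages = new Map<string, string>();
  if (depth < 0 || seen.has(url)) return pages;
  seen.add(url);

  const res = await fetch(url);
  const html = await res.text();
  pages.set(url, html);

  // Extract links and follow only those on the same origin.
  const $ = cheerio.load(html);
  const origin = new URL(url).origin;
  const links = $("a[href]")
    .map((_, el) => new URL($(el).attr("href")!, url).href)
    .get()
    .filter((href) => href.startsWith(origin));

  for (const link of links) {
    const nested = await crawl(link, depth - 1, seen);
    nested.forEach((body, nestedUrl) => pages.set(nestedUrl, body));
  }
  return pages;
}
```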
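Next, the seeding flow from chapters 3-10: split the extracted text into chunks, embed each chunk, and upsert to Pinecone in batches. This sketch assumes LangChain's RecursiveCharacterTextSplitter, the OpenAI Node SDK, and the Pinecone TypeScript client; the `seedDocument` name, the index name, and the chunk parameters are placeholders you would tune.

```typescript
// Minimal sketch of the seeding flow: split crawled text into chunks, embed
// each chunk, and upsert to Pinecone in batches. Names are illustrative, not
// the starter template's actual code.
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { Pinecone } from "@pinecone-database/pinecone";
import OpenAI from "openai";

const openai = new OpenAI();
const pinecone = new Pinecone();

export async function seedDocument(url: string, text: string) {
  // Chunk size and overlap are starting points; as the stream stresses, you
  // still need to tweak and experiment with these parameters.
  const splitter = new RecursiveCharacterTextSplitter({
    chunkSize: 1000,
    chunkOverlap: 100,
  });
  const chunks = await splitter.splitText(text);

  // Embed every chunk; keep the source URL and raw text as metadata so the
  // UI can later show exactly which chunks were used as context.
  const { data } = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: chunks,
  });
  const vectors = data.map((embedding, i) => ({
    id: `${url}#${i}`,
    values: embedding.embedding,
    metadata: { url, chunk: chunks[i] },
  }));

  // Chunked upserts: send the vectors in batches of e.g. 100 at a time.
  const index = pinecone.index("classroom-demo");
  for (let i = 0; i < vectors.length; i += 100) {
    await index.upsert(vectors.slice(i, i + 100));
  }
}
```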
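Finally, the chat endpoint flow from chapters 11-13 and 17-20: embed the question, query Pinecone for the closest chunks, inject them into the prompt, and return the matched metadata so the UI can render the context cards that make the response auditable. Names and prompt wording are illustrative.

```typescript
// Minimal sketch of the chat flow: embed the question, pull the closest
// chunks from Pinecone, inject them into the prompt, and return the matched
// chunks so the UI can render them as context cards. Illustrative names only.
import { Pinecone } from "@pinecone-database/pinecone";
import OpenAI from "openai";

const openai = new OpenAI();
const pinecone = new Pinecone();

export async function answerWithContext(question: string) {
  // Embed the question with the same model used at seeding time.
  const { data } = await openai.embeddings.create({
    model: "text-embedding-ada-002",
    input: question,
  });

  // Top matches come back with metadata, which is what makes the answer auditable.
  const index = pinecone.index("classroom-demo");
  const results = await index.query({
    vector: data[0].embedding,
    topK: 5,
    includeMetadata: true,
  });
  const sources = results.matches.map((m) => ({
    url: (m.metadata?.url as string) ?? "",
    chunk: (m.metadata?.chunk as string) ?? "",
    score: m.score ?? 0,
  }));
  const context = sources.map((s) => s.chunk).join("\n---\n");

  // Context is injected ahead of the user question; whether its exact position
  // in the prompt measurably changes answers is a question raised on the stream.
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content: `Answer using only the context below.\n\nCONTEXT:\n${context}`,
      },
      { role: "user", content: question },
    ],
  });

  // Return the answer plus the chunks that backed it, for the UI cards.
  return { answer: completion.choices[0].message.content ?? "", sources };
}
```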