Overview
Explore a conference talk on measuring and mitigating hallucinations in Retrieval-Augmented Generation (RAG) systems. Dive into the LLM revolution and its applications before addressing the critical issue of hallucinations. Learn about RAG as a mitigation strategy, comparing do-it-yourself pipelines with RAG-as-a-service options like Vectara. Discover the Hughes Hallucination Evaluation Model (HHEM) and see practical applications through sample projects like AskNews and Tax Chat. Gain insights into building more reliable AI applications that leverage the power of LLMs while minimizing fabricated information.
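The core RAG pattern the talk describes can be sketched in a few lines: retrieve the most relevant document for a query, then constrain the LLM to answer only from that retrieved context. This is a minimal illustrative sketch, not the talk's or Vectara's implementation: retrieval here is a toy keyword-overlap score (a real system would use vector search), and the assembled prompt would be sent to an actual LLM.

```python
def retrieve(query: str, docs: list[str]) -> str:
    """Toy retrieval: return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str, context: str) -> str:
    # Instructing the model to answer only from the retrieved context is
    # what reduces hallucination relative to a plain, ungrounded LLM call.
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n"
        f"Context: {context}\nQuestion: {query}"
    )

# Hypothetical mini-corpus for illustration.
docs = [
    "Vectara provides RAG as a managed service.",
    "Tax filing deadlines vary by country.",
]
context = retrieve("What does Vectara provide?", docs)
prompt = build_prompt("What does Vectara provide?", context)
```

A service such as Vectara bundles the retrieval, prompt assembly, and generation steps behind one API, which is the "RAG-as-a-service" alternative the talk contrasts with this do-it-yourself approach.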
Syllabus
Intro
About Me
The LLM Revolution
Use Cases with LLMs...
But... LLMs Hallucinate
Addressing Hallucinations with RAG
RAG: Do-It-Yourself Approach
RAG-as-a-Service with Vectara
Why Retrieval-Augmented Generation?
Why Vectara?
HHEM: Hallucination Evaluation Model
Building Applications with Vectara
Sample App: AskNews
Sample App: AskNews with HHEM
Sample App: Tax Chat
Thank You!
Taught by
Conf42