Unlocking Product Search with Machine Learning and LLM Innovations
OpenSource Connections via YouTube
Overview
Explore cutting-edge Machine Learning and Large Language Model innovations for optimizing product search in this 47-minute conference talk from Haystack EU 2024. Delve into various ML integrations designed to improve outcomes for user queries in retail scenarios, illustrated with a live demonstration. Learn about advanced techniques including query understanding and rewriting with LLMs, document enrichment, sparse, dense, and hybrid retrievers, and contextual re-ranking of results. Discover how LLM agents can dynamically select the most suitable retriever for each query, with an LLM acting as a proxy evaluator that provides feedback on the results at every iteration. Understand how this approach improves overall retrieval quality without compromising search latency by using a semantic cache to reduce the number of LLM calls. Examine embedding retrieval and vector search, including how approximate vector search can affect accuracy and when simpler retrieval methods may be preferable. Presented by Hajer Bouafif, a solutions architect specializing in data analytics and search, and Praveen Mohan Prasad, a search specialist with data science expertise, both from OpenSource Connections.
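
The sketch below is not from the talk; it is a minimal illustration, in plain Python, of the pattern the overview describes: an LLM agent picks a retriever per query, an LLM "proxy evaluator" gives feedback on the results, and a semantic cache answers repeated, similar queries without new LLM calls. The embed() and call_llm() functions, the similarity threshold, and the toy retrievers are hypothetical placeholders, not the presenters' implementation.

import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding; a real system would use a sentence encoder.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def call_llm(prompt: str) -> str:
    # Placeholder for an LLM call (query rewriting, retriever choice, evaluation).
    if "relevant" in prompt:
        return "yes"      # proxy-evaluator verdict
    return "hybrid"       # retriever choice

class SemanticCache:
    """Reuse earlier LLM answers for queries whose embeddings are close enough."""
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.entries: list[tuple[np.ndarray, str]] = []

    def get(self, query: str) -> str | None:
        q = embed(query)
        for vec, answer in self.entries:
            if float(np.dot(q, vec)) >= self.threshold:
                return answer  # cache hit: skip the LLM call
        return None

    def put(self, query: str, answer: str) -> None:
        self.entries.append((embed(query), answer))

# Toy stand-ins for sparse, dense, and hybrid retrievers.
RETRIEVERS = {
    "sparse": lambda q: [f"sparse hit for '{q}'"],
    "dense": lambda q: [f"dense hit for '{q}'"],
    "hybrid": lambda q: [f"hybrid hit for '{q}'"],
}

cache = SemanticCache()

def search(query: str, max_iters: int = 2) -> list[str]:
    # 1. Retriever selection by an LLM agent, with the semantic cache in front.
    choice = cache.get(query)
    if choice is None:
        choice = call_llm(f"Pick one of {list(RETRIEVERS)} for the query: {query}")
        cache.put(query, choice)
    results = RETRIEVERS.get(choice, RETRIEVERS["hybrid"])(query)
    # 2. LLM as proxy evaluator: accept the results or retry with another retriever.
    for _ in range(max_iters):
        verdict = call_llm(f"Are these results relevant to '{query}'? {results}")
        if verdict.strip().lower().startswith("yes"):
            break
        choice = call_llm(f"Results were poor; pick a different retriever for: {query}")
        results = RETRIEVERS.get(choice, RETRIEVERS["hybrid"])(query)
    return results

print(search("red running shoes size 42"))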
Syllabus
Haystack EU 2024 - Hajer Bouafif & Praveen Mohan Prasad: Unlock Product Search with ML & LLM Innovations
Taught by
OpenSource Connections