Overview
Explore a 15-minute conference presentation from SIGIR 2024 that examines evaluation methodologies for Generated Text Search Engine Results Pages (SERPs). Learn from a collaborative research effort by Lukas Gienapp, Harrisen Scells, and their co-authors as they investigate the intersection of Large Language Models (LLMs) and search result evaluation. Gain insights into approaches for assessing AI-generated search results and their implications for modern information retrieval systems, as presented by researchers from multiple institutions in partnership with the Association for Computing Machinery (ACM).
Syllabus
SIGIR 2024 W1.3 [pp] On the Evaluation of Generated Text SERPs
Taught by
Association for Computing Machinery (ACM)