Overview

Explore how Uber leverages Ray to extend its Michelangelo ML platform for end-to-end LLMOps in this 30-minute conference talk. Discover how Uber built a scalable, interactive development environment on Ray that can flexibly use hundreds of A100 GPUs. Learn how integrating open-source techniques for LLM training, evaluation, and serving has significantly improved Uber's ability to efficiently develop custom models based on state-of-the-art LLMs such as Llama 2. Gain insight into how Uber is harnessing LLM-driven generative AI to improve user experience and employee productivity across its mobility and delivery businesses. Access the slide deck for a visual representation of the concepts discussed.
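For readers unfamiliar with how Ray distributes work across GPUs, the sketch below is a minimal, illustrative example of Ray's standard remote-task API, not Uber's actual Michelangelo code: each task declares that it needs one GPU, and Ray schedules it onto a worker with a free GPU. The same pattern scales from a single machine to a cluster with hundreds of GPUs without changing the task code.

```python
# Illustrative sketch of Ray's GPU-aware task scheduling (assumed setup,
# not Uber's Michelangelo implementation). Requires a machine or cluster
# with GPUs available; otherwise the tasks will wait for resources.
import ray

# In practice you would connect to an existing cluster, e.g.
# ray.init(address="auto"); here we start a local instance.
ray.init()

@ray.remote(num_gpus=1)
def report_assigned_gpu(task_id: int) -> str:
    # Ray restricts CUDA_VISIBLE_DEVICES so the task only sees the GPU(s)
    # it was assigned.
    gpu_ids = ray.get_gpu_ids()
    return f"task {task_id} -> GPU(s) {gpu_ids}"

# Launch a few tasks; Ray places each on a worker with a free GPU.
futures = [report_assigned_gpu.remote(i) for i in range(4)]
print(ray.get(futures))
```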
Syllabus
Enabling End-to-End LLMOps on Michelangelo with Ray
Taught by
Anyscale