Accelerating AI Application - NAVER's Journey from Model to Application
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Explore NAVER's journey in accelerating AI application development through BentoML in this insightful conference talk. Discover how the leading South Korean tech company streamlined its production process, from model creation to application deployment. Learn about BentoML's comprehensive approach to the AI application lifecycle, covering building, shipping, and scaling. Understand how NAVER's data scientists leverage BentoML's unified model serving API to package models in a framework-agnostic manner while incorporating data processing and business logic. Gain insights into how BentoML's high-performance parallel data processing and stable model services have become integral to NAVER's search engine. Witness a real-world case study from the search industry and uncover strategies to optimize your own AI engineering workflow, enabling faster delivery of high-performing AI products.
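As a rough illustration of the framework-agnostic packaging style the talk describes, here is a minimal BentoML service sketch. It assumes BentoML 1.x and a scikit-learn model previously saved under the hypothetical tag "ranker"; the service name, input fields, and preprocessing are illustrative only and not taken from NAVER's actual setup.

```python
import bentoml
from bentoml.io import JSON

# Hypothetical model tag; in practice this is whatever was saved earlier with
# bentoml.sklearn.save_model("ranker", model).
ranker_runner = bentoml.sklearn.get("ranker:latest").to_runner()

svc = bentoml.Service("search_ranker", runners=[ranker_runner])

@svc.api(input=JSON(), output=JSON())
def rank(payload: dict) -> dict:
    # Data processing and business logic live alongside the model call
    # inside the same packaged service.
    features = [[doc["score"], doc["length"]] for doc in payload["documents"]]
    scores = ranker_runner.predict.run(features)
    ranked = sorted(
        zip(payload["documents"], scores), key=lambda pair: pair[1], reverse=True
    )
    return {"results": [doc for doc, _ in ranked]}
```

The same service definition can then be built and served with the standard CLI workflow (for example, `bentoml serve`), which is the build/ship/scale lifecycle the overview refers to.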
Syllabus
Accelerating AI Application: NAVER's Journey from Model to Application - Eric Liu, BentoML
Taught by
CNCF [Cloud Native Computing Foundation]