Overview
Explore how Baidu leverages Knative to enhance their internal deep learning platform in this conference talk. Discover the implementation of workflow automation between training and inference services using Knative eventing, smart routing and auto-scaling with Knative serving, and training job image building with the Knative build framework. Learn about the platform's evolution, including the expansion of eventing for pipeline automation, serving for improved inference services, and build for streamlined training image generation. Gain insights into Baidu's current stack, Knative components, and architectural changes that led to a 20% reduction in resource consumption. Delve into topics such as DLP on Kubernetes, serverless computing, Tektoncd-pipeline with buildkit, custom autoscale classes, cold start optimization, edge computing, and CloudEvents integration.
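As a rough illustration of the eventing piece described above, the sketch below uses the CloudEvents Go SDK to emit a "training completed" event toward a Knative broker, which a Trigger could then route to an inference service. The event type, source, payload fields, and broker URL are hypothetical placeholders, not details taken from the talk.

```go
package main

import (
	"context"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

func main() {
	// Hypothetical in-cluster Broker address; in Knative eventing, Triggers
	// subscribe to a Broker like this and filter events by type/source.
	const brokerURL = "http://broker-ingress.knative-eventing.svc.cluster.local/default/default"

	client, err := cloudevents.NewClientHTTP()
	if err != nil {
		log.Fatalf("failed to create CloudEvents client: %v", err)
	}

	// Build a CloudEvent announcing that a training job has finished.
	// Type, source, and payload schema here are illustrative only.
	event := cloudevents.NewEvent()
	event.SetType("dlp.training.completed")
	event.SetSource("dlp/training-controller")
	_ = event.SetData(cloudevents.ApplicationJSON, map[string]string{
		"jobID":    "train-123",
		"modelURI": "s3://models/train-123/model.tar.gz",
	})

	// Send the event; a downstream Trigger could deliver it to an inference
	// service that loads the new model, automating the training-to-inference handoff.
	ctx := cloudevents.ContextWithTarget(context.Background(), brokerURL)
	if result := client.Send(ctx, event); cloudevents.IsUndelivered(result) {
		log.Fatalf("failed to deliver event: %v", result)
	}
}
```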
Syllabus
Intro
Why DLP on Kubernetes
What we did
Why serverless
What's changed in 2018/2019
Our current stack
Knative components
Our old architecture
Knative build - DLP example
Tektoncd-pipeline with buildkit
What is Knative serving
Knative serving - DLP example
Knative serving - Ingress
Knative serving - autoscale old solution
Knative serving - custom autoscale class
Knative serving - cold start
Knative with container instance - old solution
Compute and Network on Edge
Knative eventing - CloudEvents
Taught by
Linux Foundation