Overview
Syllabus
[] Introduction to Sascha Heyer
[] Code, article, and videos
[] This episode's topics
[] Training ML Models
[] Training with Vertex AI
[] Training application
[] Why containers
[] Overall process
[] Demo
[] Vertex AI Training
[] Training Application
[] Enable monitoring for new versions of models
[] Using spot instances while kicking off the training jobs
[] Enabling TensorFlow real-time access while the job is training
[] When to use Vertex AI vs. when to use Google AI Platform
[] Same components with Kubeflow
[] Control inside VPC
[] Serving ML Models
[] Different ways of serving ML models
[] Pre-built container for prediction
[] Custom container for prediction
[] Model serving steps
[] Upload model
[] Endpoint
[] Deploy model
[] Container requirements
[] Build custom container I
[] Build custom container II
[] Build custom container III
[] Getting predictions
[] Serving notebook demo
[] Optimizations around speeding up deployment
[] Working with SageMaker compared to Vertex AI
[] Payload limitations
[] Limitations
[] Pricing
[] Machine Learning Teams don't need Kubernetes
[] Google Vertex AI Pipelines, a serverless product to run Kubeflow or TFX Pipelines
[] Vertex Pipelines and Kubeflow
[] Basic Pipeline
[] Required Modules
[] Components
[] Compiler
[] Demo
[] Component types
[] Predefined components
[] Component Specification
[] Share Components
[] Parameters
[] Model Lineage
[] Using Vertex Experiments
[] Scheduling pipelines
[] Production models trained and deployed
[] Vertex Batch Prediction service
[] When batch predictions are useful
[] Wrap up
Taught by
MLOps.community