Overview
Explore the deployment of AI and ML models in edge computing scenarios in this 33-minute conference talk from DevConf.CZ 2024. Compare traditional cloud-based Kubernetes distributions with lightweight alternatives such as MicroShift that are optimized for edge devices. Examine factors critical to successful edge deployments, including power consumption, model size, and performance. Discover practical examples of serving multiple models and strategies for minimizing inference process switching time in time-sensitive situations. Learn how open source components can help overcome the challenges of running AI and ML models efficiently at the edge. The talk offers valuable insights for technology enthusiasts, developers, and industry professionals interested in harnessing the potential of edge computing for AI and ML applications.
Syllabus
Efficient Edge Computing: Unleashing Potential of AI/ML W/ Lightweight Kubernetes - DevConf.CZ 2024
Taught by
DevConf