Overview
Join a detailed webinar featuring experts from NVIDIA, Intel, and Dell who explore the often-overlooked technical and infrastructure costs of implementing generative AI. The session covers key enterprise considerations: scalability challenges, the computational demands of Large Language Model inferencing, fabric requirements, and the sustainability impact of increased power consumption and cooling needs. Learn practical strategies for cost optimization by comparing on-premises and cloud deployments, and discover how to leverage pre-trained models for specific market domains. Discussions of AI infrastructure trends, silicon diversity, training methodologies, and both endpoint and edge inference offer insights into managing and reducing the environmental and financial impact of AI implementations.
Syllabus
Introduction
AI's Rapid Evolution
AI Infrastructure
Trends
Power Usage
Silicon Diversity
Training
Fine-Tuning
RAG
David McIntyre
Rob on Fabric
AI-Optimized Ethernet Example
Wire It Differently
Cost per Bit
Summary
Q&A
Endpoint Inference
Edge Inference
Question of the Day
Conclusion
Taught by
SNIA Video