Overview
Artificial Intelligence (AI) is transforming society in many ways, from speech recognition and self-driving cars to the immense possibilities offered by generative AI. AI technology provides enterprises with the compute power, tools, and algorithms their teams need to do their life's work.
Designed for enterprise professionals, this course provides invaluable insights into the ever-changing realm of AI. Whether you're a seasoned professional or just beginning your journey into AI, this course is essential for staying ahead in today's rapidly evolving technological landscape.
We start the journey with an introduction to AI, covering basic AI concepts and principles.
Then, we delve into data center and cloud infrastructure, followed by AI operations.
This course is part of the preparation material for the “NVIDIA-Certified Associate: AI Infrastructure and Operations" certification.
Successfully completing this exam will allow you to showcase your expertise and support your professional development.
Who should take this course?
* IT Professionals
* System and Network Administrators
* DevOps Engineers
* Data center professionals
No prior experience required.
Let's get started!
Syllabus
- Introduction to AI
- In this module, you will explore AI applications across various industries and delve into the fundamental concepts of AI, Machine Learning (ML), and Deep Learning (DL). The course will also introduce you to generative AI, how Large Language Models (LLMs) work, and the new business opportunities this technology is unlocking. You will learn what a GPU is, distinguish the key differences between GPUs and CPUs, and explore the software ecosystem that enables developers to harness GPU computing for data science. Finally, you will learn considerations for deploying AI workloads across different infrastructures, from on-premises data centers to cloud and multi-cloud setups.
- AI Infrastructure
- In this module, we will examine infrastructure-level considerations for deploying AI clusters. You will learn about the requirements of multi-system AI clusters, including the capabilities of NVIDIA GPUs and CPUs for addressing the demands of AI workloads, as well as storage and networking considerations. We will discuss how energy-efficient computing practices help data centers lower their carbon footprint, and how recommended design documents, or Reference Architectures (RAs), can serve as a foundation for building best-of-breed, optimized AI systems. We will end this module by discussing how cloud computing enhances AI deployments and outlining the key considerations for deploying AI in the cloud.
- AI Operations
- This final module covers key aspects of infrastructure management, monitoring, cluster orchestration, and job scheduling. You will identify general concepts for provisioning, managing, and monitoring AI infrastructure, and describe the value of cluster management and the tools used for it. Finally, you will learn the key differences between orchestration and scheduling, the common tools for each, and the value of MLOps tools for continuous delivery and automation of AI workloads.
- Course Completion Quiz
- It is highly recommended that you complete all the course activities before you begin the quiz. Good luck!
Taught by
NVIDIA Training
Reviews
5.0 rating, based on 1 Class Central review
4.6 rating at Coursera based on 253 ratings
- Excellent course! Very complete and very useful for learning the fundamentals of AI operations and infrastructure.