Intel® Solutions Pro – AI in the Cloud

Intel via Coursera

Overview

Take a deep technical dive into AI workloads in the cloud. Gain insights into topics including AI pipelines, AI performance benchmarking, instance selection for AI workloads, and federated learning, and get hands-on experience through online labs.

Syllabus

  • AI in the Cloud
    • This course explores the diverse ways training and inference are run on cloud instances, when they are run, and how to evaluate whether an AI workload is well suited for cloud deployment. We will also explore the price-performance tradeoffs of hardware for AI and how to make the right choice for deployment.
  • Choosing the Best Public Instance for your AI Workload
    • In this course, you will learn how to choose the right instance for your workload, investigate the nuances among cloud service providers (CSPs) to determine the best one, identify what constitutes a good versus a bad instance choice, and review the model implementation methods that must be considered when choosing the best AI instance in the cloud.
  • Running AI End-to-End in the Cloud
    • This course covers what constitutes an end-to-end AI pipeline. We’ll talk about the importance of looking at an AI use case holistically and why Intel® Xeon® processor-based cloud instances are ideal. Additionally, we’ll delve into end-to-end AI optimization strategies in detail, then examine three AI workflows implemented on AWS cloud instances and see how these optimization strategies provide a step-by-step path to performant and efficient AI in the cloud. You will also complete a lab in which you optimize an AI workload end-to-end and learn how framework accelerations, runtime parameter tuning, multi-instance data-parallel execution, and quantization increase its overall throughput and efficiency on your cloud instance (a minimal quantization sketch follows this syllabus).
  • Run Intel Tools in the Cloud: OpenVINO
    • This course explores OpenVINO, an open-source toolkit for optimizing and deploying AI inference. We’ll walk through the three-step build, optimize, and deploy process for your end-to-end AI solutions and how OpenVINO makes it easy to follow the “write once, deploy anywhere” philosophy (a minimal inference sketch follows this syllabus). The course wraps up with examples and resources.
  • CSP AI: Services and Platforms
    • This course provides an overview of the key AI services and AI platforms, or “tools,” offered by the three largest cloud service providers. Topics cover the major categories of AI services and tools, including turnkey and platform services; the importance of optimized software stacks and images; and the impact of careful hardware, or “instance,” selection.
  • Intel Gaudi AI Accelerator on Amazon Web Services
    • This course provides an in-depth exploration of the Intel® Gaudi® AI Accelerator on Amazon Web Services’ deep learning training product. It includes practical insights into cost comparisons that demonstrate the cost-effectiveness of the Intel® Gaudi® AI Accelerator, and you’ll gain a thorough understanding of the product’s scalability. In the lab, you will migrate the TensorFlow EfficientNet workload to the Intel® Gaudi® AI Accelerator, demonstrating how it supercharges your AI workload and significantly reduces processing time.
  • Distributed AI in the Cloud
    • This course addresses the basic idea and theory behind the different types of distributed training models and topologies associated with deep learning. The course explores their challenges and the communication overhead required for AI in the cloud. In the lab, you will configure and run a distributed training workload to increase the speed of training the AI model (a minimal distributed-training sketch follows this syllabus).
  • Federated Learning
    • This course provides an overview of federated learning, an explanation of the “data access problem,” and how federated learning can help address it. The course then delves into how sensitive and protected data can be accessed for AI applications while remaining compliant with current regulations, how to characterize the data access problems that federated learning can solve, and why federated learning can add value to AI (a minimal federated-averaging sketch follows this syllabus).
  • Run Intel Tools in the Cloud: Intel AMX & Intel AVX-512 Demonstration
    • Learn how to take advantage of hardware optimizations to get optimal AI model performance. This course provides an overview of what happens in a demonstration using Intel® Xeon® Scalable processors. You’ll gain insights into the performance difference with and without Intel® AVX-512, how to evaluate the performance difference between Intel® AVX-512 and Intel® AVX-512 VNNI, and the difference between Intel® AVX-512 and Intel® AMX (a sketch that checks these CPU flags follows this syllabus).
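
The end-to-end lab above lists quantization among its optimization techniques. The following is a minimal, illustrative sketch of post-training dynamic quantization in PyTorch, not the course’s lab code; the toy model and layer choices are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative toy model; the course lab uses its own workload.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Post-training dynamic quantization: weights are stored in int8,
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    print(quantized(x).shape)  # torch.Size([1, 10])
```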
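
As a rough illustration of OpenVINO’s build-optimize-deploy flow described above (not course material), the sketch below loads an already-converted IR model and runs inference on CPU; the model path and the static input shape are assumptions.

```python
import numpy as np
import openvino as ov

core = ov.Core()

# "model.xml" is a placeholder path to an OpenVINO IR model you have
# already converted (e.g., with ov.convert_model).
model = core.read_model("model.xml")

# "Deploy anywhere": the same code can target other devices by
# changing the device string (e.g., "GPU").
compiled = core.compile_model(model, device_name="CPU")

# Dummy input shaped for the model's first input (assumes a static shape).
input_tensor = np.random.rand(*compiled.input(0).shape).astype(np.float32)
result = compiled([input_tensor])[compiled.output(0)]
print(result.shape)
```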
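
The distributed training module above discusses data-parallel training and its communication overhead. Below is a generic sketch using PyTorch DistributedDataParallel, not the course lab; it assumes launch via `torchrun --nproc_per_node=N script.py` on a single CPU instance.

```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="gloo")  # "gloo" works on CPU instances
    rank = dist.get_rank()

    model = DDP(nn.Linear(32, 2))  # gradients are all-reduced across ranks
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for _ in range(10):
        x, y = torch.randn(16, 32), torch.randn(16, 2)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()  # gradient communication happens here
        optimizer.step()

    if rank == 0:
        print("final loss:", loss.item())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```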
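
To make the federated learning idea concrete, here is a toy federated-averaging sketch in which only model weights, never raw data, leave each simulated site. The linear model, synthetic data, and hyperparameters are illustrative assumptions, not the course’s implementation.

```python
import numpy as np

# Hypothetical per-site data: each "site" keeps its raw data local.
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(100, 5)), rng.normal(size=100)) for _ in range(3)]

global_w = np.zeros(5)

def local_update(w, X, y, lr=0.01, epochs=5):
    # Plain gradient descent on a linear model, run inside each site.
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

for _ in range(20):
    # Only the updated weights leave each site -- never the data itself.
    local_ws = [local_update(global_w.copy(), X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites])
    # Federated averaging: weight each site's model by its sample count.
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("global model after federated rounds:", global_w)
```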
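
The AMX and AVX-512 demonstration above compares instruction-set extensions; on a Linux cloud instance you can check which of them the underlying hardware exposes by reading /proc/cpuinfo, as in this small sketch (the flag names are the standard Linux names).

```python
# Quick check (Linux) for the instruction-set extensions the demo compares.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":")[1].split())
            break

for name, flag in [
    ("Intel AVX-512 (foundation)", "avx512f"),
    ("AVX-512 VNNI", "avx512_vnni"),
    ("Intel AMX (tile)", "amx_tile"),
    ("Intel AMX (bf16)", "amx_bf16"),
    ("Intel AMX (int8)", "amx_int8"),
]:
    print(f"{name}: {'yes' if flag in flags else 'no'}")
```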

Taught by

Jennifer James

