Pre-training Mixtral MoE Model with SageMaker HyperPod - Fine-Tuning and Continued Pre-Training

Generative AI on AWS via YouTube

Overview

Learn how to pre-train, fine-tune, and continue pre-training the Mixtral Mixture of Experts (MoE) model using Amazon SageMaker HyperPod and SLURM in this comprehensive webinar series. Begin with an introduction to the Mixtral MoE model architecture and an overview of SLURM presented by AWS experts Chris Fregly and Antje Barth. Dive deep into training the Mixtral MoE foundation model on SLURM with SageMaker HyperPod through a detailed walkthrough by AWS Applied Scientist Ben Snyder. Explore advanced techniques for instruction fine-tuning and continued pre-training with practical demonstrations from Antje Barth and Chris Fregly. Access additional resources, including the O'Reilly book Generative AI on AWS, related GitHub repositories, and community platforms, to further enhance your understanding of implementing large language models on AWS infrastructure.
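
For orientation, the sketch below illustrates the kind of launch workflow the webinar covers: submitting a multi-node training job to a SLURM cluster such as one provisioned with SageMaker HyperPod. It is a minimal, illustrative example only and is not taken from the webinar; the training script name (train_mixtral.py), config path, node counts, and GPU counts are hypothetical placeholders.

# Illustrative sketch only (not from the webinar): write a SLURM batch
# script that launches torchrun across the allocated nodes, then submit
# it with sbatch from the cluster's controller (head) node.
# Script names, paths, and resource sizes are hypothetical placeholders.
import subprocess
import textwrap

sbatch_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=mixtral-pretrain
    #SBATCH --nodes=4
    #SBATCH --ntasks-per-node=1
    #SBATCH --gres=gpu:8
    #SBATCH --output=%x_%j.out

    # Use the first allocated node as the rendezvous endpoint.
    head_node=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

    srun torchrun \\
        --nnodes=$SLURM_NNODES \\
        --nproc_per_node=8 \\
        --rdzv_backend=c10d \\
        --rdzv_endpoint="$head_node:29500" \\
        train_mixtral.py --config configs/mixtral_moe.yaml
    """)

with open("pretrain_mixtral.sbatch", "w") as f:
    f.write(sbatch_script)

# Submit the job and print SLURM's confirmation line.
result = subprocess.run(
    ["sbatch", "pretrain_mixtral.sbatch"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "Submitted batch job 1234"

In this pattern, SLURM starts one task per node, and each task runs torchrun with eight local workers; the shared rendezvous endpoint on the first node lets the workers form a single distributed training job.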

Syllabus

Pre-train Mixtral MoE model on SageMaker HyperPod + SLURM + Fine-Tuning + Continued Pre-Training

Taught by

Generative AI on AWS
