

Generative AI Advance Fine-Tuning for LLMs

IBM via Coursera

Overview

Fine-tuning a large language model (LLM) is crucial for aligning it with specific business needs, enhancing its accuracy, and optimizing its performance, which in turn gives businesses precise, actionable insights that drive efficiency and innovation. This course gives aspiring gen AI engineers the fine-tuning skills employers are actively seeking.

During this course, you’ll explore different approaches to fine-tuning, including fine-tuning causal LLMs with human feedback and direct preference. You’ll look at LLMs as policies that define probability distributions over generated responses, and you’ll study instruction-tuning with Hugging Face. You’ll learn to calculate rewards using human feedback and to build reward models with Hugging Face. You’ll also explore reinforcement learning from human feedback (RLHF), proximal policy optimization (PPO) and the PPO Trainer, and optimal solutions to direct preference optimization (DPO) problems. Along the way, you’ll get valuable hands-on experience in online labs covering reward modeling, PPO, and DPO. If you’re looking to add in-demand LLM fine-tuning capabilities to your resume, ENROLL TODAY and build the job-ready skills employers are looking for in just two weeks!
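
The overview mentions instruction-tuning with Hugging Face. The sketch below is a minimal, hedged illustration of what that workflow can look like using TRL’s SFTTrainer; the checkpoint, dataset, and hyperparameters are illustrative assumptions rather than the course’s exact setup, and the SFTTrainer signature differs between TRL releases (this follows the older, pre-0.12 style where dataset_text_field is passed directly).

```python
# A minimal sketch of instruction-tuning (supervised fine-tuning) with
# Hugging Face TRL's SFTTrainer. Checkpoint, dataset, and hyperparameters
# are illustrative assumptions, not the course's exact configuration.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_name = "facebook/opt-350m"  # assumed small causal LM for the demo
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# An instruction dataset whose "text" column already holds formatted
# prompt-plus-response strings.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

training_args = TrainingArguments(
    output_dir="sft-output",
    per_device_train_batch_size=4,
    num_train_epochs=1,
    learning_rate=2e-5,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    dataset_text_field="text",  # column containing the instruction text
    max_seq_length=512,
)
trainer.train()
```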

Syllabus

  • Different Approaches to Fine-Tuning
    • In this module, you’ll begin by defining instruction-tuning and its process. You’ll also gain insights into loading a dataset, building text-generation pipelines, and setting training arguments. You’ll then delve into reward modeling, where you’ll preprocess the dataset and apply a low-rank adaptation (LoRA) configuration. You’ll learn to quantify response quality, guide model optimization, and incorporate reward preferences. You’ll also work with the reward trainer, an advanced technique for training a reward model, and compute the reward model loss using Hugging Face. The labs in this module will give you practice with instruction-tuning and reward models; a minimal reward-modeling sketch appears after this syllabus.
  • Fine-Tuning Causal LLMs with Human Feedback and Direct Preference
    • In this module, you’ll describe how large language models (LLMs) act as policies that assign probabilities to responses generated from the input text. You’ll also gain insights into the relationship between the policy and the language model, parameterized by omega, for generating possible responses. The module then demonstrates how to calculate rewards from human feedback using a reward function, train on response samples, and evaluate the agent’s performance. You’ll define a scoring function for sentiment analysis using PPO with Hugging Face, explain the PPO configuration class and learning rate for PPO training, and see how the PPO trainer processes query samples to optimize the chatbot’s policy toward high-quality responses. Finally, the module delves into direct preference optimization (DPO), which uses Hugging Face to align the model with human preferences more directly and efficiently. The labs in this module provide hands-on practice with human feedback and DPO; minimal PPO and DPO sketches appear after this syllabus.
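
The first module combines reward modeling with LoRA. Below is a hedged sketch of that combination using PEFT’s LoraConfig and TRL’s RewardTrainer; the checkpoint, dataset, and settings are illustrative assumptions, and the expected dataset columns and trainer signature vary across TRL versions (this follows the older style that takes pre-tokenized chosen/rejected pairs).

```python
# A minimal sketch of reward modeling with a LoRA adapter, combining PEFT's
# LoraConfig with TRL's RewardTrainer. All names and settings are illustrative
# assumptions, not the course's exact configuration.
from datasets import load_dataset
from peft import LoraConfig, TaskType
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

model_name = "distilbert-base-uncased"  # assumed small encoder for the reward model
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# A preference dataset with "chosen" and "rejected" response columns.
raw = load_dataset("Anthropic/hh-rlhf", split="train[:1%]")

def preprocess(example):
    # Tokenize both responses; the reward loss compares the two scores,
    # roughly -log(sigmoid(r_chosen - r_rejected)).
    chosen = tokenizer(example["chosen"], truncation=True, max_length=512)
    rejected = tokenizer(example["rejected"], truncation=True, max_length=512)
    return {
        "input_ids_chosen": chosen["input_ids"],
        "attention_mask_chosen": chosen["attention_mask"],
        "input_ids_rejected": rejected["input_ids"],
        "attention_mask_rejected": rejected["attention_mask"],
    }

dataset = raw.map(preprocess, remove_columns=raw.column_names)

# LoRA: train small low-rank adapters on the attention projections instead of
# updating all model weights.
peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],  # DistilBERT attention projection layers
)

trainer = RewardTrainer(
    model=model,
    args=RewardConfig(output_dir="reward-output", per_device_train_batch_size=4, max_length=512),
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```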

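The second module centers on the PPO configuration class, the PPO trainer, and a sentiment-analysis scoring function. The sketch below is a hedged illustration of that loop; the checkpoints, toy queries, and settings are assumptions, and it follows the pre-1.0 PPOTrainer interface used in many Hugging Face tutorials, which newer TRL releases restructure.

```python
# A minimal sketch of RLHF-style PPO tuning toward positive-sentiment text,
# using TRL's PPOConfig and PPOTrainer. Names and settings are illustrative
# assumptions, not the course's exact configuration.
import torch
from datasets import Dataset
from transformers import AutoTokenizer, pipeline
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_name = "lvwerra/gpt2-imdb"  # assumed GPT-2 checkpoint adapted to movie reviews
config = PPOConfig(model_name=model_name, learning_rate=1.41e-5, batch_size=4, mini_batch_size=2)

model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)      # policy + value head
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Toy query prompts; a real run would use many movie-review prefixes.
queries = ["This film was", "The plot of the movie", "I watched it and", "The acting felt"]
dataset = Dataset.from_dict({"query": queries})
dataset = dataset.map(lambda x: {"input_ids": tokenizer.encode(x["query"])})
dataset.set_format(type="torch")

def collator(batch):
    # Keep each field as a plain list so queries of different lengths stay un-padded.
    return {key: [item[key] for item in batch] for key in batch[0]}

ppo_trainer = PPOTrainer(config, model, ref_model, tokenizer, dataset=dataset, data_collator=collator)

# Scoring function: a sentiment classifier whose positive-class score is the reward.
reward_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb")

for batch in ppo_trainer.dataloader:
    query_tensors = batch["input_ids"]

    # Generate a continuation for each query, keeping only the new tokens.
    response_tensors = []
    for query in query_tensors:
        output = ppo_trainer.generate(
            query, max_new_tokens=16, do_sample=True, pad_token_id=tokenizer.eos_token_id
        )
        response_tensors.append(output.squeeze()[len(query):])
    batch["response"] = tokenizer.batch_decode(response_tensors)

    # Reward each (query, response) pair with its positive-sentiment score.
    texts = [q + r for q, r in zip(batch["query"], batch["response"])]
    rewards = []
    for outputs in reward_pipe(texts, top_k=None):
        positive = next(o["score"] for o in outputs if o["label"] == "POSITIVE")
        rewards.append(torch.tensor(positive))

    # One PPO update over the batch of (query, response, reward) triples.
    stats = ppo_trainer.step(query_tensors, response_tensors, rewards)
```

The same module covers DPO. The sketch below is a hedged illustration with TRL’s DPOTrainer on toy preference pairs; the base model, data, and beta value are assumptions, and argument names (DPOConfig vs. TrainingArguments, tokenizer vs. processing_class) differ between TRL releases.

```python
# A minimal sketch of direct preference optimization (DPO) with TRL's
# DPOTrainer. Names, toy data, and settings are illustrative assumptions.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "gpt2"  # assumed small causal LM for the demo
model = AutoModelForCausalLM.from_pretrained(model_name)
ref_model = AutoModelForCausalLM.from_pretrained(model_name)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Toy preference data: each prompt has a preferred ("chosen") and a
# dispreferred ("rejected") completion.
train_dataset = Dataset.from_dict({
    "prompt":   ["The movie was", "The service at the restaurant was"],
    "chosen":   [" wonderful and kept me engaged the whole time.",
                 " friendly, fast, and attentive."],
    "rejected": [" a complete waste of time.",
                 " slow and the staff were rude."],
})

args = DPOConfig(
    output_dir="dpo-output",
    per_device_train_batch_size=2,
    beta=0.1,  # how strongly the policy is kept close to the reference model
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

Unlike the PPO loop above, DPO skips the explicit reward model and sampling step: it optimizes the policy directly on the preference pairs against a frozen reference model, which is what makes it the more direct and efficient approach described in the module.
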
Taught by

Joseph Santarcangelo, Ashutosh Sagar, Wojciech 'Victor' Fulmyk, and Fateme Akbari

