

Fine-Tuning Code Language Models - A Practical Guide

Discover AI via YouTube

Overview

Learn how to fine-tune code language models (Code LLMs) in this 22-minute tutorial that demonstrates a practical implementation using StarCoder. Master the process of creating instruction-based fine-tuning datasets and understand the technical setup requirements, including the Torch, Transformers, and PEFT libraries. Explore real-world code generation tasks such as Python implementations for prime numbers and cosine similarity calculations. Gain insights into GPU memory requirements, environment configuration, and integration with platforms like Hugging Face and Weights & Biases. Survey the inner workings of various code generation models, including Microsoft Copilot, Amazon's CodeWhisperer, GitHub's Copilot X, OpenAI's code interpreter, and Google's PaLM Coder. Understand how LLMs process both human language and code through the transformer architecture, including techniques such as causal masking and infilling. Follow along with a hands-on Colab notebook demonstration that shows the complete fine-tuning process for Code LLMs ranging from 2B to 16B parameters.
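The instruction-based fine-tuning data described above can be sketched in plain Python. The prompt template, field names, and example tasks (prime numbers, cosine similarity) below are illustrative assumptions, not the exact format used in the video:

```python
# Build an instruction-style fine-tuning dataset for a Code LLM.
# The template and examples are illustrative assumptions, not the
# exact format from the tutorial.

PROMPT_TEMPLATE = "### Instruction:\n{instruction}\n\n### Response:\n{response}"

examples = [
    {
        "instruction": "Write a Python function that returns all prime numbers up to n.",
        "response": (
            "def primes_up_to(n):\n"
            "    return [p for p in range(2, n + 1)\n"
            "            if all(p % d for d in range(2, int(p ** 0.5) + 1))]"
        ),
    },
    {
        "instruction": "Write a Python function that computes the cosine similarity of two vectors.",
        "response": (
            "def cosine_similarity(a, b):\n"
            "    dot = sum(x * y for x, y in zip(a, b))\n"
            "    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)\n"
            "    return dot / norm"
        ),
    },
]

def format_example(ex):
    """Render one example into a single training string."""
    return PROMPT_TEMPLATE.format(**ex)

# Each formatted string would then be tokenized and fed to the trainer.
dataset = [format_example(ex) for ex in examples]
```

Each rendered string pairs a natural-language instruction with its target code completion; a library such as PEFT would then be used on top of this data to fine-tune only a small set of adapter weights.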

Syllabus

Introduction
Instruction
Free Colab Notebook
Code LLM
GPU Requirements
Other Models
Inner Workings
Causal Masking Objective

Taught by

Discover AI
