Classroom Contents
Stable Diffusion - Training SDXL 1.0 - Finetune, LoRA, D-Adaptation, Prodigy
- 1 Intro
- 2 Overview of SDXL 1.0 and SD 1.5 models
- 3 Dataset Overview
- 4 Short explanation about my ComfyUI node setup
- 5 WAS Node Suite text concatenation
- 6 CLIP G and CLIP L
- 7 CLIPTextEncodeSDXL
- 8 Naive local finetuning with Adafactor
- 9 How to fit finetuning settings into a 24 GB VRAM consumer GPU
- 10 Local finetune with Adafactor settings
- 11 Min SNR Gamma paper (see the weighting sketch after this list)
- 12 Installing local TensorBoard to view event logs
- 13 Runpod overview
- 14 How much Runpod costs
- 15 Runpod finetune settings
- 16 Weights and Biases overview
- 17 Determining the initial learning rate for AdamW finetune
- 18 Adding a sample prompt to training settings to visually gauge training progress
- 19 Checking AdamW finetune sample images
- 20 Efficiency nodes for XY plot
- 21 How to retrieve your models from Runpod
- 22 Evaluating finetune XY plot
- 23 D-Adaptation overview
- 24 D-Adaptation training settings
- 25 Decoupled Weight Decay Regularization paper (see the AdamW sketch after this list)
- 26 What does weight decay do?
- 27 Betas and Growth Rate
- 28 drhead's choice of hyperparameters
- 29 LoRA network dimensions and alpha (see the scaling sketch after this list)
- 30 TensorBoard analysis for D-Adaptation LoRA
- 31 D-Adaptation sample images analysis
- 32 Prodigy repository
- 33 Prodigy training settings (see the optimizer sketch after this list)
- 34 How to enable cosine annealing
- 35 Prodigy training settings version 2
- 36 Prodigy code deep dive
- 37 Why I didn't use any warmup for Prodigy training settings
- 38 Weights and Biases analysis for Prodigy
- 39 Prodigy sample images analysis
- 40 Prodigy XY plot
- 41 Prodigy AdamW and Higher Weight Decay analysis
- 42 Prodigy final version XY plot
- 43 Closing thoughts
- 44 CivitAI SDXL Competition
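
For reference, the loss weighting discussed in chapter 11 comes from the Min-SNR paper (Hang et al., 2023). A minimal sketch of its epsilon-prediction form follows; `snr` would come from the noise scheduler, and gamma = 5 is the paper's suggested default, not necessarily the video's setting.

```python
import torch

def min_snr_weight(snr: torch.Tensor, gamma: float = 5.0) -> torch.Tensor:
    # Clamp each timestep's SNR at gamma, then divide by the SNR itself;
    # this down-weights low-noise (high-SNR) timesteps during training.
    return torch.clamp(snr, max=gamma) / snr

# Example: weights for three timesteps with increasing SNR.
print(min_snr_weight(torch.tensor([0.5, 5.0, 50.0])))  # -> [1.0, 1.0, 0.1]
```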
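Chapters 25 and 26 cover decoupled weight decay (Loshchilov & Hutter's AdamW). The sketch below shows the idea as PyTorch implements it: decay is applied directly to the weights, separately from the gradient-based Adam step, rather than folded into the loss as L2 regularization. The module and hyperparameters are placeholders.

```python
import torch

model = torch.nn.Linear(4, 4)  # placeholder module for illustration
# AdamW applies p <- p - lr * weight_decay * p as a separate step,
# so the decay never enters Adam's moment estimates.
opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)

loss = model(torch.randn(2, 4)).pow(2).mean()  # dummy loss
loss.backward()
opt.step()
```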
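Chapter 29 deals with the LoRA network dimension (rank) and alpha. Per the LoRA paper, the learned update is scaled by alpha / dim, so alpha effectively rescales how strongly the adapter perturbs the frozen weight. A sketch with illustrative values (kohya-style `network_dim` / `network_alpha`, not the video's settings):

```python
import torch

dim, alpha = 32, 16             # rank and alpha; the scale alpha/dim = 0.5 here
d_out, d_in = 1280, 1280        # size of the frozen weight being adapted
A = torch.randn(dim, d_in)      # down-projection, trained
B = torch.zeros(d_out, dim)     # up-projection, zero-init so training starts at W
delta_W = (alpha / dim) * (B @ A)  # update added to the frozen weight W
```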
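Chapters 32-34 cover the Prodigy optimizer and cosine annealing. A minimal sketch, assuming the `prodigyopt` package from the konstmish/prodigy repository; the model, step count, and weight decay below are placeholders rather than the video's settings:

```python
import torch
from prodigyopt import Prodigy

model = torch.nn.Linear(16, 16)  # placeholder network
# Prodigy estimates the step size (d) itself, so lr stays at 1.0;
# decouple=True selects AdamW-style decoupled weight decay.
opt = Prodigy(model.parameters(), lr=1.0, weight_decay=0.01, decouple=True)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=1000)

for step in range(1000):
    loss = model(torch.randn(8, 16)).pow(2).mean()  # dummy objective
    opt.zero_grad()
    loss.backward()
    opt.step()
    sched.step()  # cosine-anneal the lr multiplier on top of Prodigy's adaptive d
```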