Overview
Syllabus
Intro
Overview of SDXL 1.0 and SD 1.5 models
Dataset Overview
Short explanation of my ComfyUI node setup
WAS Node Suite text concatenation
CLIP G and CLIP L
CLIPTextEncodeSDXL
Naive local finetuning with Adafactor
How to fit finetuning settings into a 24 GB VRAM consumer GPU
Local finetune with Adafactor settings
Min SNR Gamma paper
Installing local Tensorboard to view event logs
Runpod overview
How much Runpod costs
Runpod finetune settings
Weights and Biases overview
Determining the initial learning rate for AdamW finetune
Adding a sample prompt to the training settings to visually gauge training progress
Checking AdamW finetune sample images
Efficiency nodes for XY plot
How to retrieve your models from Runpod
Evaluating finetune XY plot
D-Adaptation overview
D-Adaptation training settings
Decoupled Weight Decay Regularization paper
What does weight decay do?
Betas and Growth Rate
drhead's choice of hyperparameters
LoRA network dimensions and alpha
Tensorboard analysis for D-Adaptation LoRA
D-Adaptation sample images analysis
Prodigy repository
Prodigy training settings
How to enable cosine annealing
Prodigy training settings version 2
Prodigy code deep dive
Why I didn't use any warmup for Prodigy training settings
Weights and Biases analysis for Prodigy
Prodigy sample images analysis
Prodigy XY plot
Prodigy AdamW and Higher Weight Decay analysis
Prodigy final version XY plot
Closing thoughts
CivitAI SDXL Competition
Taught by
kasukanra