Stable Diffusion 3 2B Medium Training with Kohya and SimpleTuner - Full Finetune and LoRA

kasukanra via YouTube

Overview

Dive into an extensive 80-minute tutorial on training Stable Diffusion 3 2B Medium with kohya's sd-scripts and SimpleTuner, covering both a full finetune and LoRA training. Follow along as an art-style training run is documented end to end, including the experiments, the mistakes, and the analysis of results. Learn about environment setup, parameter configuration, and the different training approaches. Explore topics such as SDPA, multiresolution noise, timesteps, and Prodigy optimizer settings. Gain insights into troubleshooting dependency issues, running inference workflows, and testing trained models. Compare different learning rates, analyze runs in Weights & Biases, and understand the trade-offs between full finetuning and LoRA. Benefit from practical tools, theoretical discussion, and real-world examples to deepen your understanding of SD3 training for art styles, concepts, and subjects.
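
To make the configuration steps above more concrete, here is a minimal, hypothetical Python sketch of building the caption-metadata file covered in the tutorial's "Creating the meta_cap.json" chapter, in the image-key-to-caption shape described in the sd-scripts finetuning docs. The dataset path and file layout are placeholders, and sd-scripts itself ships helper scripts (such as finetune/merge_captions_to_metadata.py) that generate this file, so treat this as an illustration of the format rather than the video's exact procedure.

```python
# Hypothetical illustration, not code from the video: assemble a meta_cap.json
# mapping each image to its caption (image key -> {"caption": ...}).
# Verify the exact schema against the sd-scripts finetuning documentation.
import json
from pathlib import Path

IMAGE_DIR = Path("/path/to/art_style_dataset")   # placeholder dataset folder of .png + .txt pairs
OUT_FILE = IMAGE_DIR / "meta_cap.json"           # filename used in the video's chapter title

metadata = {}
for image_path in sorted(IMAGE_DIR.glob("*.png")):
    caption_path = image_path.with_suffix(".txt")    # sidecar caption file
    if not caption_path.exists():
        continue                                     # skip uncaptioned images
    key = str(image_path.with_suffix(""))            # key the entry by path without extension
    metadata[key] = {"caption": caption_path.read_text(encoding="utf-8").strip()}

OUT_FILE.write_text(json.dumps(metadata, indent=2), encoding="utf-8")
print(f"Wrote {len(metadata)} caption entries to {OUT_FILE}")
```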

Syllabus

Introduction
List of SD3 training repositories
Method of approach
kohya sd-scripts environment setup
.toml file setup
SDPA
Multiresolution noise
Timesteps
.toml miscellaneous
Creating the meta_cap.json
sd-scripts sd3 parameters
sd3 pretrained model path
kohya sd3 readme
sd3 sampler settings
sd3 SDPA
Prodigy settings
Dependency issues
Actually running the training
How to run sd3 workflow/test model
kohya sd3 commit hash
Now what?
SD3 AdamW8Bit
wandb proof
Is it over?
Hindsight training appendix
Upper bound of sd3 LR 1.5e-3 for kohya exploding gradient
1.5e-4
SimpleTuner quickstart
SimpleTuner environment setup
Setting up CLI logins
SD3 environment overview
Dataset settings overview
Dataset settings hands-on
multidatabackend.json
SimpleTuner documentation
sdxl_env.sh
Model name
Remaining settings
train_sdxl.sh
Diffusers vs. Checkpoints
Symlinking models
ComfyUI UNET loader
Initial explorations: overfitting?
Environment art overfitting?
Character art overfitting evaluation
Trying short prompts
ODE samplers
Testing other prompts
How to generate qualitative grids
Generating grids through API workflow (see the sketch after this syllabus)
8e-6
Analyzing wandb
Higher LR 1.5e-5
Ablation study #1
Ablation study #2
Ablation study #3
SimpleTuner LoRA setup
Adding lora_rank/lora_alpha to accelerate launch
Failed LoRA qualitative grids (rank/alpha = 16)
Exploding gradient LR = 1.5e-3
LR = 4e-4 #1
LR = 4e-4 #2
LR = 6.5e-4
Finetune vs. LoRA #1
Finetune vs. LoRA #2
Finetune vs. LoRA #3
Finetune vs. LoRA environment
Conclusion
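
For the "Generating grids through API workflow" chapter, the following Python sketch shows one plausible way to queue a batch of test prompts and seeds against a locally running ComfyUI server through its /prompt HTTP endpoint. It assumes you have already exported your SD3 workflow from ComfyUI in API format as sd3_workflow_api.json; the node class names ("CLIPTextEncode", "KSampler") and input keys ("text", "seed") are the stock ComfyUI ones and may not match your exported graph, and the prompts below are invented for illustration.

```python
# Hypothetical sketch (not the video's exact workflow): queue a grid of
# test prompts x seeds against a local ComfyUI server via its /prompt endpoint.
import copy
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"     # default ComfyUI server address
WORKFLOW_PATH = "sd3_workflow_api.json"        # placeholder: workflow exported in API format

test_prompts = [                               # made-up evaluation prompts
    "a castle on a cliff at sunset, in the trained art style",
    "portrait of a knight in ornate armor, in the trained art style",
]
seeds = [1, 2, 3]

with open(WORKFLOW_PATH, encoding="utf-8") as f:
    base_graph = json.load(f)

def queue(graph):
    """POST one workflow graph to ComfyUI and return its response (contains a prompt_id)."""
    payload = json.dumps({"prompt": graph}).encode("utf-8")
    request = urllib.request.Request(
        COMFY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

for prompt in test_prompts:
    for seed in seeds:
        graph = copy.deepcopy(base_graph)
        for node in graph.values():
            # Note: this patches every CLIPTextEncode node, including a negative-prompt
            # node if your graph has one; key off specific node ids for finer control.
            if node.get("class_type") == "CLIPTextEncode":
                node["inputs"]["text"] = prompt
            if node.get("class_type") == "KSampler":
                node["inputs"]["seed"] = seed
        print(queue(graph))
```

Each response contains a prompt_id that can then be polled through ComfyUI's /history endpoint to collect the finished images into a comparison grid.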

Taught by

kasukanra
