Syllabus
Introduction
List of SD3 training repositories
Method of approach
kohya sd-scripts environment setup
.toml file setup
SDPA (scaled dot-product attention)
Multiresolution noise
Timesteps
Miscellaneous .toml settings
Creating the meta_cap.json
sd-scripts SD3 parameters
SD3 pretrained model path
kohya SD3 README
SD3 sampler settings
SD3 SDPA
Prodigy settings
Dependency issues
Actually running the training
How to run the SD3 workflow / test the model
kohya SD3 commit hash
Now what?
SD3 AdamW8bit
wandb proof
Is it over?
Hindsight training appendix
Upper bound of SD3 LR: exploding gradients at 1.5e-3 in kohya
LR = 1.5e-4
SimpleTuner quickstart
SimpleTuner environment setup
Setting up CLI logins
SD3 environment overview
Dataset settings overview
Dataset settings hands-on
multidatabackend.json
SimpleTuner documentation
sdxl_env.sh
Model name
Remaining settings
train_sdxl.sh
Diffusers vs. checkpoints
Symlinking models
ComfyUI UNET loader
Initial explorations: overfitting?
Environment art overfitting?
Character art overfitting evaluation
Trying short prompts
ODE samplers
Testing other prompts
How to generate qualitative grids
Generating grids through an API workflow
LR = 8e-6
Analyzing wandb
Higher LR = 1.5e-5
Ablation study #1
Ablation study #2
Ablation study #3
SimpleTuner LoRA setup
Adding lora_rank/lora_alpha to accelerate launch
Failed LoRA qualitative grids: rank/alpha = 16
Exploding gradients: LR = 1.5e-3
LR = 4e-4 #1
LR = 4e-4 #2
LR = 6.5e-4
Finetune vs. LoRA #1
Finetune vs. LoRA #2
Finetune vs. LoRA #3
Finetune vs. LoRA environment
Conclusion
Taught by kasukanra