Stable Diffusion 3 2B Medium Training with Kohya and SimpleTuner - Full Finetune and LoRA

YouTube videos curated by Class Central.

Classroom Contents
- 1 Introduction
- 2 List of SD3 training repositories
- 3 Method of approach
- 4 kohya sd-scripts environment setup
- 5 .toml file setup (config sketch after this list)
- 6 SDPA
- 7 Multiresolution noise
- 8 Timesteps
- 9 .toml miscellaneous
- 10 Creating the meta_cap.json (command sketch after this list)
- 11 sd-scripts sd3 parameters
- 12 sd3 pretrained model path
- 13 kohya sd3 readme
- 14 sd3 sampler settings
- 15 sd3 SDPA
- 16 Prodigy settings
- 17 Dependency issues
- 18 Actually running the training
- 19 How to run sd3 workflow/test model
- 20 kohya sd3 commit hash
- 21 Now what?
- 22 SD3 AdamW8Bit
- 23 wandb proof
- 24 Is it over?
- 25 Hindsight training appendix
- 26 Upper bound of sd3 LR 1.5e-3 for kohya exploding gradient
- 27 1.5e-4
- 28 SimpleTuner quickstart
- 29 SimpleTuner environment setup
- 30 Setting up CLI logins (commands after this list)
- 31 SD3 environment overview
- 32 Dataset settings overview
- 33 Dataset settings hands-on
- 34 multidatabackend.json (example after this list)
- 35 SimpleTuner documentation
- 36 sdxl_env.sh
- 37 Model name
- 38 Remaining settings
- 39 train_sdxl.sh
- 40 Diffusers vs. Checkpoints
- 41 Symlinking models (command sketch after this list)
- 42 ComfyUI UNET loader
- 43 Initial explorations: overfitting?
- 44 Environment art: overfitting?
- 45 Character art: overfitting evaluation
- 46 Trying short prompts
- 47 ODE samplers
- 48 Testing other prompts
- 49 How to generate qualitative grids
- 50 Generating grids through API workflow
- 51 8e-6
- 52 Analyzing wandb
- 53 Higher LR 1.5e-5
- 54 Ablation study #1
- 55 Ablation study #2
- 56 Ablation study #3
- 57 SimpleTuner LoRA setup
- 58 Adding lora_rank/lora_alpha to accelerate launch (sketch after this list)
- 59 Failed LoRA qualitative grids (rank/alpha = 16)
- 60 Exploding gradient LR = 1.5e-3
- 61 LR = 4e-4 #1
- 62 LR = 4e-4 #2
- 63 LR = 6.5e-4
- 64 Finetune vs. LoRA #1
- 65 Finetune vs. LoRA #2
- 66 Finetune vs. LoRA #3
- 67 Finetune vs. LoRA environment
- 68 Conclusion
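
For chapter 5 (and the SDPA, multiresolution noise, and timestep chapters that follow it), here is a minimal sketch of what a kohya sd-scripts training-config .toml can look like. The option names are real sd-scripts arguments; the values and paths are illustrative assumptions, not the settings used in the videos.

```toml
# Illustrative kohya sd-scripts training config; values and paths are placeholders
pretrained_model_name_or_path = "/models/sd3_medium.safetensors"  # chapter 12
output_dir = "/output/sd3-finetune"
sdpa = true                        # scaled dot-product attention (chapters 6 and 15)
multires_noise_iterations = 6      # multiresolution noise (chapter 7)
multires_noise_discount = 0.3
min_timestep = 0                   # timestep range (chapter 8)
max_timestep = 1000
optimizer_type = "AdamW8bit"       # chapter 22 switches to AdamW8bit
learning_rate = 1.5e-4             # the LR chapter 27 trains with
```

A config like this would be passed to the sd3 branch's training script with something like `accelerate launch sd3_train.py --config_file config.toml`; the exact script, parameters, and commit hash are covered in chapters 11, 13, and 20.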
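
Chapter 10 builds the meta_cap.json that kohya's finetuning workflow reads captions from. In sd-scripts this file is typically generated with the finetune/merge_captions_to_metadata.py helper, which merges per-image caption files into one JSON; a sketch with placeholder paths:

```bash
# Merge the dataset's .txt sidecar captions into a single metadata JSON
# (both paths are placeholders)
python finetune/merge_captions_to_metadata.py \
  /datasets/my-dataset \
  /datasets/meta_cap.json \
  --caption_extension .txt
```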
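
Chapter 30 sets up CLI logins. SD3's weights are gated on Hugging Face, and the runs are logged to wandb (chapters 23 and 52), so the two standard login commands apply:

```bash
huggingface-cli login   # paste an HF access token with read access to the SD3 repo
wandb login             # paste a wandb API key so training metrics get logged
```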
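
Chapter 34 walks through multidatabackend.json, SimpleTuner's dataset manifest. A minimal sketch assuming one local image folder with .txt sidecar captions; the ids, paths, and resolution here are placeholders, not the dataset from the videos:

```json
[
  {
    "id": "my-dataset",
    "type": "local",
    "instance_data_dir": "/datasets/my-dataset",
    "resolution": 1024,
    "resolution_type": "pixel",
    "caption_strategy": "textfile",
    "cache_dir_vae": "/cache/vae/my-dataset"
  },
  {
    "id": "text-embeds",
    "dataset_type": "text_embeds",
    "type": "local",
    "default": true,
    "cache_dir": "/cache/text/my-dataset"
  }
]
```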
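
Chapters 40-42 deal with getting the diffusers-format output into ComfyUI. One way to do what "Symlinking models" describes is to link the trained transformer weights into the folder ComfyUI's UNET loader scans, so nothing has to be copied; both paths below are placeholders:

```bash
# Expose a SimpleTuner diffusers checkpoint to ComfyUI's UNET loader without copying
ln -s /workspace/SimpleTuner/output/checkpoint-1000/transformer/diffusion_pytorch_model.safetensors \
      /workspace/ComfyUI/models/unet/sd3-finetune.safetensors
```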
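
Chapter 58 adds lora_rank/lora_alpha to the accelerate launch command. Assuming the sdxl_env.sh conventions from chapters 36-39, one way to do this is through the env file's extra-args variable; the variable names here are assumptions based on SimpleTuner's env-file style, and rank/alpha 16 matches chapter 59:

```bash
# Hypothetical additions to sdxl_env.sh for the LoRA runs
export MODEL_TYPE="lora"   # switch from full finetune to LoRA
export TRAINER_EXTRA_ARGS="${TRAINER_EXTRA_ARGS} --lora_rank=16 --lora_alpha=16"
```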