Stable Diffusion 3 2B Medium Training with Kohya and SimpleTuner - Full Finetune and LoRA

kasukanra via YouTube

Classroom Contents

  1. Introduction
  2. List of SD3 training repositories
  3. Method of approach
  4. kohya sd-scripts environment setup
  5. .toml file setup
  6. SDPA
  7. Multiresolution noise
  8. Timesteps
  9. .toml miscellaneous
  10. Creating the meta_cap.json
  11. sd-scripts sd3 parameters
  12. sd3 pretrained model path
  13. kohya sd3 readme
  14. sd3 sampler settings
  15. sd3 SDPA
  16. Prodigy settings
  17. Dependency issues
  18. Actually running the training
  19. How to run sd3 workflow/test model
  20. kohya sd3 commit hash
  21. Now what?
  22. SD3 AdamW8Bit
  23. wandb proof
  24. Is it over?
  25. Hindsight training appendix
  26. Upper bound of sd3 LR 1.5e-3 for kohya exploding gradient
  27. 1.5e-4
  28. SimpleTuner quickstart
  29. SimpleTuner environment setup
  30. Setting up CLI logins
  31. SD3 environment overview
  32. Dataset settings overview
  33. Dataset settings hands-on
  34. multidatabackend.json
  35. SimpleTuner documentation
  36. sdxl_env.sh
  37. Model name
  38. Remaining settings
  39. train_sdxl.sh
  40. Diffusers vs. Checkpoints
  41. Symlinking models
  42. ComfyUI UNET loader
  43. Initial explorations overfitting?
  44. Environment art overfitting?
  45. Character art overfitting evaluation
  46. Trying short prompts
  47. ODE samplers
  48. Testing other prompts
  49. How to generate qualitative grids
  50. Generating grids through API workflow
  51. 8e-6
  52. Analyzing wandb
  53. Higher LR 1.5e-5
  54. Ablation study #1
  55. Ablation study #2
  56. Ablation study #3
  57. SimpleTuner LoRA setup
  58. Adding lora_rank/lora_alpha to accelerate launch
  59. LoRA failed qualitative grids, LoRA rank/alpha = 16
  60. Exploding gradient LR = 1.5e-3
  61. LR = 4e-4 #1
  62. LR = 4e-4 #2
  63. LR = 6.5e-4
  64. Finetune vs. LoRA #1
  65. Finetune vs. LoRA #2
  66. Finetune vs. LoRA #3
  67. Finetune vs. LoRA environment
  68. Conclusion
