Overview
Explore the generative AI capabilities of the Stable Diffusion platform.
Syllabus
- Introduction
- What is Stable Diffusion?
- What can you do with Stable Diffusion?
- What's different about Stable Diffusion?
- How can you access Stable Diffusion?
- Installing Stable Diffusion locally
- Using Stable Diffusion
- What does a prompt do?
- Stable Diffusion seeds
- Stable Diffusion batches and pixel counts
- Prompt basics
- Questions to answer when writing prompts
- PNG information and saving
- Using CFG scale
- Prompt weighting
- Writing prompts for series 2 models
- Prompt libraries and styles
- Interrogating an image
- Artist names and rendering styles
- Sampling and steps
- Automatic iterating
- Changing SD models
- Using LoRA models
- Using embeddings
- Upscaling SD images
- Settings and extensions
- img2img basics
- img2img options on hosted sites
- Using a sketch in img2img
- Using a photobash with img2img
- Changing aspect ratios with img2img
- Removing elements with inpainting
- Adding objects with inpainting
- Outpainting
- Using outpainting to resize an image
- Improving faces created by SD
- Outpainting with openOutpaint
- InstructPix2Pix
- Free handy resources
- Introduction to ControlNet
- Installing ControlNet
- OpenPose in ControlNet
- Limitations using OpenPose
- Using img2img and ControlNet
- Choosing a ControlNet model
- Image size and ControlNet
- Other features in ControlNet
- OpenPose editors
- Using models to influence image style
- Inpainting and upscaling
- Refining with XYZ plot
- Complete a Stable Diffusion workflow
- Creating a custom model
- Creating models with DreamBooth
- Merging models
- Training a model using an object
- What's next
Taught by
Ben Long