Classroom Contents
Stable Diffusion - Applying ControlNet to Character Design - Part 2
- 1 Intro
- 2 Download and place the ControlNet 1.1 models in the proper directory (see the download sketch after this list)
- 3 Segment Anything extension
- 4 Install Visual Studio Build Tools if you get any errors regarding pycocotools
- 5 Generate baseline reference using traditional merged inpainting model
- 6 Using Grounding DINO to create a semi-supervised inpaint mask (see the masking sketch after this list)
- 7 Enable ControlNet 1.1 inpaint global harmonious (see the API sketch after this list)
- 8 ControlNet 1.1 inpainting gotcha #1
- 9 ControlNet 1.1 inpainting gotcha #2
- 10 Tuning the inpainting parameters
- 11 Analyzing the new tuned outputs
- 12 Compositing the ControlNet 1.1 inpaint output in Photoshop
- 13 ControlNet 1.1 inpaint without Grounding DINO
- 14 Exploring ControlNet 1.1 instruct pix2pix for targeted variations
- 15 Determining the limitations of ip2p
- 16 Using segment anything with ip2p
- 17 Applying ip2p + Grounding DINO to PNGtuber
- 18 Analyzing the tuned PNGtuber results
- 19 ControlNet 1.1 Tile model overview
- 20 Applying the tile model to the shipbuilder illustration
- 21 Showing the thumbnail tile model generation
- 22 Introducing the image that will be used with tile model contextual upscaling
- 23 Checking a GitHub issue for more information regarding the tile model
- 24 Contextual upscaling with the ControlNet 1.1 tile model (see the upscaling sketch after this list)
- 25 Comparing upscaler methods: tile model, vanilla Ultimate SD Upscale, 4x-UltraSharp
- 26 Using the tile model upscale on the star pupils chibi
- 27 Compositing the upscaled closed-mouth expression
- 28 Creating the closed-eyes expression
- 29 Closing thoughts
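
For chapter 2, here is a minimal sketch of downloading the ControlNet 1.1 checkpoints used in the video from the official lllyasviel/ControlNet-v1-1 repository on Hugging Face and copying them into the folder the sd-webui-controlnet extension scans. The WEBUI_DIR path assumes a default AUTOMATIC1111 web UI install; adjust it for your setup.

```python
from pathlib import Path
import shutil

from huggingface_hub import hf_hub_download

WEBUI_DIR = Path("stable-diffusion-webui")  # assumption: default install location
MODELS_DIR = WEBUI_DIR / "extensions" / "sd-webui-controlnet" / "models"

# The three ControlNet 1.1 models the video relies on; each .pth checkpoint
# pairs with a .yaml config that must sit next to it.
FILES = [
    "control_v11p_sd15_inpaint.pth", "control_v11p_sd15_inpaint.yaml",
    "control_v11e_sd15_ip2p.pth", "control_v11e_sd15_ip2p.yaml",
    "control_v11f1e_sd15_tile.pth", "control_v11f1e_sd15_tile.yaml",
]

MODELS_DIR.mkdir(parents=True, exist_ok=True)
for name in FILES:
    cached = hf_hub_download(repo_id="lllyasviel/ControlNet-v1-1", filename=name)
    shutil.copy(cached, MODELS_DIR / name)
    print(f"placed {name} in {MODELS_DIR}")
```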
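
Chapter 6 builds its mask through the Segment Anything extension's UI. The sketch below reproduces the same text-prompt-to-mask flow with the upstream GroundingDINO and segment-anything Python packages; the config and checkpoint paths and the "face" prompt are assumptions, not values from the video.

```python
import numpy as np
import torch
from torchvision.ops import box_convert
from groundingdino.util.inference import load_model, load_image, predict
from segment_anything import sam_model_registry, SamPredictor

# Grounding DINO turns a text prompt into bounding boxes for the target region.
dino = load_model(
    "GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py",  # path assumption
    "weights/groundingdino_swint_ogc.pth",                            # path assumption
)
image_rgb, image_tensor = load_image("character.png")
boxes, logits, phrases = predict(
    model=dino,
    image=image_tensor,
    caption="face",  # hypothetical prompt; use whatever names your target region
    box_threshold=0.35,
    text_threshold=0.25,
)
assert len(boxes) > 0, "no region matched the text prompt"

# SAM refines the detected box into a pixel-accurate inpaint mask.
sam = sam_model_registry["vit_h"](checkpoint="weights/sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image_rgb)

h, w = image_rgb.shape[:2]
xyxy = box_convert(boxes * torch.tensor([w, h, w, h]), "cxcywh", "xyxy").numpy()
masks, _, _ = predictor.predict(box=xyxy[0], multimask_output=False)
np.save("inpaint_mask.npy", masks[0])  # boolean HxW mask for the inpainting step
```

This is the "semi-supervised" part of the chapter title: you supervise with a short text prompt and the detector/segmenter pair does the pixel-level work.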
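
Chapter 7 flips the preprocessor to inpaint_global_harmonious in the web UI; the same unit can be driven through the AUTOMATIC1111 API (launched with the --api flag), as in this hedged sketch. Field names follow the sd-webui-controlnet API conventions, the prompt and file names are placeholders, and the model string must match an entry in your own ControlNet model dropdown.

```python
import base64
import requests

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("character.png")],
    "mask": b64("inpaint_mask.png"),            # mask from the Grounding DINO step
    "prompt": "1girl, smiling, detailed face",  # placeholder prompt
    "denoising_strength": 0.75,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                # no input_image: the unit falls back to the img2img image and mask
                "module": "inpaint_global_harmonious",
                "model": "control_v11p_sd15_inpaint",  # as listed in your dropdown
                "weight": 1.0,
                "guidance_start": 0.0,
                "guidance_end": 1.0,
            }]
        }
    },
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
resp.raise_for_status()
```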
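
Chapter 24's contextual upscaling amounts to running img2img at a larger resolution with modest denoising while the tile model keeps the regenerated detail anchored to the original content. Below is a sketch under the same API assumptions as above; the resolution and denoising values are illustrative, not the video's.

```python
import base64
import requests

with open("illustration.png", "rb") as f:
    source = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [source],
    "prompt": "highly detailed illustration",  # placeholder prompt
    "width": 1536,              # hypothetical 2x target for a 768px source
    "height": 1536,
    "denoising_strength": 0.4,  # low enough to preserve the composition
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "module": "tile_resample",
                "model": "control_v11f1e_sd15_tile",  # as listed in your dropdown
                "weight": 1.0,
            }]
        }
    },
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=1200)
resp.raise_for_status()
```

Chapter 25 then compares this against vanilla Ultimate SD Upscale and 4x-UltraSharp.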