

Stable Diffusion - Applying ControlNet to Character Design - Part 2

kasukanra via YouTube

Overview

Dive into an in-depth 57-minute tutorial on applying ControlNet 1.1 to character design, focusing on Stable Diffusion techniques. Learn how to set up ControlNet 1.1 models, use the Segment Anything extension, and troubleshoot common issues. Explore advanced inpainting techniques using Grounding DINO and global harmonious inpainting. Discover the potential of instruct pix2pix for targeted variations and its application to PNGtuber creation. Gain insights into the ControlNet 1.1 Tile model for contextual upscaling and compare various upscaling methods. Master the art of compositing and expression creation for character designs. Perfect for digital artists, concept artists, and AI art enthusiasts looking to enhance their character design workflow using cutting-edge machine learning tools.
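
The model setup described above can also be scripted outside the webui. The sketch below is not taken from the video (which installs everything through the AUTOMATIC1111 interface): it uses huggingface_hub to download a few ControlNet 1.1 checkpoints from lllyasviel/ControlNet-v1-1 and copy them where the sd-webui-controlnet extension looks for them, and the destination path is an assumption about a default webui install.

import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download

# Assumed location inside a default AUTOMATIC1111 install; recent versions of
# the extension also read models/ControlNet under the webui root.
controlnet_dir = Path("stable-diffusion-webui/extensions/sd-webui-controlnet/models")
controlnet_dir.mkdir(parents=True, exist_ok=True)

for filename in [
    "control_v11p_sd15_inpaint.pth",  # global harmonious inpainting
    "control_v11e_sd15_ip2p.pth",     # instruct pix2pix
    "control_v11f1e_sd15_tile.pth",   # tile model for contextual upscaling
]:
    cached = hf_hub_download(repo_id="lllyasviel/ControlNet-v1-1", filename=filename)
    shutil.copy(cached, controlnet_dir / filename)
    print(f"placed {filename} in {controlnet_dir}")

After a restart (or a refresh of the model list), the checkpoints should show up in the extension's model dropdown.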

Syllabus

Intro
Download and place ControlNet 1.1 models in the proper directory
Segment Anything extension
Install Visual Studio Build Tools if you have any errors regarding pycocotools
Generate baseline reference using traditional merged inpainting model
Using Grounding DINO to create a semi-supervised inpaint mask
Enable ControlNet 1.1 inpaint global harmonious (see the inpainting code sketch after this syllabus)
ControlNet 1.1 inpainting gotcha #1
ControlNet 1.1 gotcha #2
Tuning the inpainting parameters
Analyzing the new tuned outputs
Compositing ControlNet 1.1 inpaint output in Photoshop
ControlNet 1.1 inpaint without Grounding DINO
Exploring ControlNet 1.1 instruct pix2pix for targeted variations
Determining the limitations of ip2p
Using Segment Anything with ip2p
Applying ip2p + Grounding DINO to PNGtuber
Analyzing the tuned PNGtuber results
ControlNet 1.1 Tile model overview
Applying the tile model to the shipbuilder illustration
Showing the thumbnail tile model generation
Introducing the image that will be used with tile model contextual upscaling
Checking the GitHub issue for more information regarding the tile model
Contextual upscaling with ControlNet 1.1 tile model (see the tile-model code sketch after this syllabus)
Comparing upscaler methods: tile model, vanilla Ultimate SD Upscale, 4x-UltraSharp
Use tile model upscale on the star pupils chibi
Composite upscaled closed mouth expression
Creating the closed eyes expression
Closing thoughts
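
For readers who want to see the inpainting idea from the syllabus in code, here is a minimal sketch using the diffusers library rather than the webui the video works in. It is an approximation, not the author's workflow: the model IDs, prompt, and file names are placeholder assumptions, and in the video the mask comes from Grounding DINO / Segment Anything rather than a file on disk.

import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from PIL import Image


def make_inpaint_condition(image: Image.Image, mask: Image.Image) -> torch.Tensor:
    """Build the control image for the inpaint ControlNet: masked pixels become -1."""
    img = np.array(image.convert("RGB")).astype(np.float32) / 255.0
    msk = np.array(mask.convert("L")).astype(np.float32) / 255.0
    img[msk > 0.5] = -1.0  # flag the region to be repainted
    return torch.from_numpy(img[None].transpose(0, 3, 1, 2))


# Hypothetical inputs; in the video the mask is produced by Grounding DINO / SAM.
init_image = Image.open("character.png").convert("RGB")
mask_image = Image.open("mask.png").convert("L")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    prompt="ornate fantasy armor, concept art",  # placeholder prompt
    image=init_image,
    mask_image=mask_image,
    control_image=make_inpaint_condition(init_image, mask_image),
    num_inference_steps=30,
).images[0]
result.save("inpainted.png")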
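
In the same spirit, here is a minimal diffusers sketch of the tile model's contextual upscaling: resize the image up, then run img2img with the ControlNet 1.1 tile model conditioned on that same image so the regenerated detail stays faithful to the original. The video drives this through Ultimate SD Upscale inside the webui; the pipeline, model IDs, strength, and 2x factor below are assumptions for illustration.

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from PIL import Image

# Hypothetical low-resolution input to be upscaled 2x.
low_res = Image.open("shipbuilder.png").convert("RGB")
upscaled = low_res.resize((low_res.width * 2, low_res.height * 2), Image.LANCZOS)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1e_sd15_tile", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    prompt="highly detailed character illustration",  # placeholder prompt
    image=upscaled,          # img2img source
    control_image=upscaled,  # the tile model conditions on the same image
    strength=0.6,            # how aggressively new detail is re-synthesized
    num_inference_steps=30,
).images[0]
result.save("upscaled_2x.png")

Lower strength keeps the result closer to the source image, while higher strength invents more new detail.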

Taught by

kasukanra

