Class Central Classrooms beta
YouTube videos curated by Class Central.
Classroom Contents
Video to Anime - Generate an Epic Animation from Your Phone Recording by Using Stable Diffusion AI
- 1 How to turn a video into an animation in a fully automated manner for free
- 2 Introduction to the DaVinci Resolve free edition
- 3 Introduction to FFmpeg
- 4 A short tutorial video for DaVinci Resolve
- 5 How to prepare your real video footage
- 6 How to change the timeline resolution in DaVinci Resolve
- 7 Image scaling of imported videos in DaVinci Resolve when file resolutions don't match
- 8 The Edit tab and importing video in DaVinci Resolve
- 9 Where to see the properties of your video in DaVinci Resolve
- 10 How to crop a video to a square or any other aspect ratio with DaVinci Resolve
- 11 The downside of using a distant camera position
- 12 How to export / render your video in DaVinci Resolve with the best settings
- 13 How to export all frames of a video using FFmpeg (see the extraction sketch after this list)
- 14 Parameters for extracting all frames of a video with FFmpeg
- 15 Why file names matter for batch processing scripts
- 16 How to install and use Automatic1111 Web UI
- 17 The training dataset I made from exported video frames to train myself
- 18 Why and how I used RunPod.io for training
- 19 Why I trained myself and a style into SD 1.5 for video to animation
- 20 Tutorial on how to train yourself and a style
- 21 How to do two-concept training in Stable Diffusion 1.5 with DreamBooth
- 22 The DreamBooth settings I used to train myself and a style into SD 1.5 in this tutorial
- 23 Master tutorial for DreamBooth
- 24 What ControlNet is, and how to install and use it
- 25 ControlNet settings to change for the video-to-anime process
- 26 The Multi-frame Video Rendering for Stable Diffusion script for consistent video-to-animation
- 27 How to install external scripts in the Automatic1111 Web UI
- 28 How to change the commit version of git repos, e.g. the Automatic1111 Web UI (see the git sketch after this list)
- 29 When you are ready to start processing real video frames into anime
- 30 You don't have to do pre-training to follow this tutorial
- 31 The first step of video to animation
- 32 The importance of the first generated frame
- 33 How to generate your first converted driving frame
- 34 What kind of initial frame-to-image conversion you should aim for
- 35 The difference if we don't train ourselves and just use a custom model
- 36 The next step after you get the first converted frame of your anime video
- 37 The settings used in the img2img tab for batch frame processing
- 38 Settings for the Multi-frame Video Rendering for Stable Diffusion script
- 39 How to upscale all of the generated video-to-animation frames
- 40 How to fix the naming of batch-generated images for upscaling (see the renaming sketch after this list)
- 41 How to do batch AI upscaling in the Automatic1111 Web UI
- 42 How to improve faces and eyes when upscaling with AI
- 43 How to animate generated frame images
- 44 How to import images as an image sequence into DaVinci Resolve for animating
- 45 Fixing clip attributes
- 46 Playing our animated video for the first time
- 47 How to improve the flickering problem with a very simple trick (see the frame-blending sketch after this list)
- 48 How to move a clip frame by frame in DaVinci Resolve
- 49 Which composite mode reduces the flickering problem in DaVinci Resolve
- 50 How to apply deflickering in DaVinci Resolve
- 51 How the animation made in this video could have been improved significantly
- 52 Another technique I tested: the img2img alternative test
- 53 Video-to-animation results from the img2img alternative test
- 54 A search for freely available deflickering tools, models, and libraries
- 55 All-In-One-Deflicker
- 56 My videos have fully corrected subtitles
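
The FFmpeg frame-extraction step in items 13 and 14 comes down to a single command. Below is a minimal Python sketch that wraps it with subprocess; the file name input.mp4, the frames output folder, and the frame_%05d.png naming pattern are placeholder choices, not values taken from the video.

```python
# Minimal sketch: dump every frame of a video as a zero-padded PNG sequence.
# Assumes FFmpeg is installed and on PATH; paths below are placeholders.
import subprocess
from pathlib import Path

def extract_frames(video_path: str, out_dir: str = "frames") -> None:
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "ffmpeg",
            "-i", video_path,                 # input video
            f"{out_dir}/frame_%05d.png",      # zero-padded output names
        ],
        check=True,
    )

if __name__ == "__main__":
    extract_frames("input.mp4")
```

The zero-padded %05d pattern matters because the later batch steps and the image-sequence import read the frames in lexical order.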
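Items 15 and 40 are about keeping frame file names consistently zero-padded so batch scripts and DaVinci Resolve's image-sequence import see them in the right order. This is one possible way to do that renaming; the outputs folder and the frame_ prefix are assumptions, and your generated files may already follow a different pattern.

```python
# Sketch: rename files like "frame_7.png" to "frame_00007.png" so they sort correctly.
# Folder name and prefix are placeholders; adjust to your own output naming.
import re
from pathlib import Path

def pad_frame_names(folder: str = "outputs", width: int = 5) -> None:
    for path in sorted(Path(folder).glob("*.png")):
        match = re.search(r"(\d+)", path.stem)   # first run of digits in the name
        if not match:
            continue
        new_name = f"frame_{int(match.group(1)):0{width}d}{path.suffix}"
        if path.name != new_name:
            path.rename(path.with_name(new_name))

if __name__ == "__main__":
    pad_frame_names()
```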
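Item 28 covers rolling a git repo such as the Automatic1111 Web UI back to a known-good commit. A minimal sketch, assuming the repo folder is stable-diffusion-webui and that you substitute a real commit hash for the placeholder:

```python
# Sketch: pin a local git repo to a specific commit via subprocess.
import subprocess

REPO_DIR = "stable-diffusion-webui"  # assumed clone location
COMMIT = "abc1234"                   # placeholder: replace with the commit hash you want

subprocess.run(["git", "fetch", "--all"], cwd=REPO_DIR, check=True)
subprocess.run(["git", "checkout", COMMIT], cwd=REPO_DIR, check=True)
# Return to the latest version later with: git checkout master && git pull
```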
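The flicker-reduction trick in items 47 to 49 is done inside DaVinci Resolve by duplicating the clip, offsetting it by one frame, and choosing a composite mode. As a rough Python approximation of that frame-blending idea (not the method shown in the video), each frame can be averaged with its predecessor using Pillow; the frames_in and frames_out folder names are assumptions.

```python
# Rough approximation: blend each frame 50/50 with the previous one to soften flicker.
from pathlib import Path
from PIL import Image

def blend_with_previous(src_dir: str = "frames_in", dst_dir: str = "frames_out") -> None:
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    previous = None
    for frame_path in sorted(Path(src_dir).glob("*.png")):
        current = Image.open(frame_path).convert("RGB")
        blended = current if previous is None else Image.blend(previous, current, 0.5)
        blended.save(Path(dst_dir) / frame_path.name)
        previous = current

if __name__ == "__main__":
    blend_with_previous()
```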