Doubling Stable Diffusion Inference Speed with RTX Acceleration and TensorRT - A Comprehensive Guide

Software Engineering Courses - SE Courses via YouTube


Classroom Contents


  1. Introduction to how to utilize RTX Acceleration / TensorRT for 2x inference speed
  2. How to do a fresh installation of Automatic1111 SD Web UI
  3. How to enable quick SD VAE and SD UNET selections in the settings of Automatic1111 SD Web UI
  4. How to install the TensorRT extension to hugely speed up Stable Diffusion image generation
  5. How to start / run Automatic1111 SD Web UI
  6. How to install the TensorRT extension manually via URL install
  7. How to install the TensorRT extension via the git clone method
  8. How to download and upgrade cuDNN files
  9. Speed test of the SD 1.5 model without TensorRT
  10. How to generate a TensorRT engine for a model
  11. Explanation of the min, optimal, and max settings when generating a TensorRT model
  12. Where the ONNX file is exported
  13. How to set command line arguments to avoid errors during TensorRT generation
  14. How to get maximum performance when generating and using TensorRT
  15. How to start using the generated TensorRT engine for almost double the speed
  16. How to switch to the dev branch of Automatic1111 SD Web UI for SDXL TensorRT usage
  17. Comparison of image differences with TensorRT on and off
  18. Speed test of TensorRT with multiple resolutions
  19. Generating a TensorRT engine for Stable Diffusion XL (SDXL)
  20. How to verify you have switched to the dev branch of Automatic1111 Web UI to make SDXL TensorRT work
  21. Generating images with SDXL TensorRT
  22. How to generate a TensorRT engine for your DreamBooth-trained model
  23. How to install the After Detailer (ADetailer) extension and an explanation of what it does
  24. Starting generation of a TensorRT engine for SDXL
  25. The difference between batch size and batch count
  26. How to train an amazing SDXL DreamBooth model
  27. How to get an amazing prompt list for DreamBooth models and use it
  28. The dataset I used for DreamBooth training on myself and why it is deliberately low quality
  29. How to generate TensorRT engines for LoRA models
  30. Where and how to see the TensorRT profiles you have for each model
  31. Generating a LoRA TensorRT engine for SD 1.5 and testing it
  32. How to fix the bug where TensorRT LoRA is not effective
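For the git clone installation method listed above, a minimal sketch of the commands involved, assuming the default Automatic1111 directory layout and NVIDIA's TensorRT extension repository (verify the exact URL against the video before running):

```shell
# Run from the root of your Automatic1111 SD Web UI installation.
# Extensions live in the `extensions` subdirectory by convention.
cd extensions

# NVIDIA's TensorRT extension for the Web UI; confirm this is the
# repository the course uses before cloning.
git clone https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT.git

# Restart the Web UI afterwards so the new extension is loaded.
```

The URL-install method in the Web UI's Extensions tab performs essentially the same clone into the same directory.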
