Autogen and Local LLMs Create Realistic Stable Diffusion Model Autonomously

kasukanra via YouTube

Classroom Contents

  1. Introduction
  2. Technical Design Flowchart
  3. Installing Chrome
  4. Chromedriver not available and how to fix it
  5. Testing the selenium webdriver
  6. Autogen code overview
  7. AI Agents more in-depth
  8. Fetch image overview
  9. Accessing the page source
  10. Gotcha with the page source
  11. Renaming the low-resolution link to the highest-resolution link
  12. Testing the fetch_images script
  13. Revisiting the Autogen code
  14. Autogen in action
  15. Checking the downloaded images
  16. Organizing the images
  17. Using Topaz Gigapixel AI to upscale images
  18. Loading LLM framework overview
  19. Installing text-generation-webui
  20. Showing the git hash for text-generation-webui
  21. Downloading llava-v1.5-13b-GPTQ
  22. Support LLaVA v1.5 pull request
  23. Commit log for LLaVA v1.5
  24. Original LLaVA v1.5-13b repository
  25. Possibility of loading llava-v1.5-13b using the --load-in-4bit flag in the readme
  26. Downloading the model through the CLI
  27. Model placement in the text-generation-webui directory
  28. Multimodal documentation for starting up the API
  29. Command to start the server
  30. text-generation-webui GUI
  31. Looking at the pull request for suggested settings
  32. Changing presets in text-generation-webui
  33. Initial trials in the GUI
  34. Comparing concise and verbose prompt instructions
  35. Testing the text-generation-webui API
  36. Getting the IP address of Windows from inside Linux
  37. Finding the endpoint/API examples
  38. Testing the API request
  39. Comparing results between the API and the GUI
  40. llava v1.5-13b responding in another language: hallucination?
  41. Using Replicate's original llava v1.5-13b model
  42. Bringing up concise vs. verbose prompts again
  43. Setting up the Replicate API key locally
  44. Setting up the Python call to Replicate
  45. Running the Replicate iteration code
  46. Downloading the llava-v1.5-7b model
  47. Setting up the llama.cpp framework
  48. Adding models to llama.cpp
  49. Showing the llama.cpp commit hash
  50. Starting up the llama.cpp server
  51. llama.cpp GUI
  52. llama.cpp API code overview
  53. llama.cpp server API documentation
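
For readers following along with the API testing steps above (items 35–39), the core task is sending an image plus an instruction to a locally running text-generation-webui multimodal API and getting a caption back. The sketch below builds such a request body; the endpoint URL, prompt template, and parameter values are assumptions based on the legacy `/api/v1/generate` interface, not details taken from the video.

```python
import base64
import json

# Assumed legacy text-generation-webui API endpoint and default port.
API_URL = "http://127.0.0.1:5000/api/v1/generate"

def build_caption_payload(image_bytes: bytes, instruction: str) -> dict:
    """Build a generate-request body with the image embedded in the prompt.

    The multimodal extension expects the image as a base64 data-URI inside
    an <img> tag, which it strips out before passing the text to LLaVA.
    """
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    prompt = (
        f"### Human: {instruction}\n"
        f'<img src="data:image/jpeg;base64,{b64}">\n'
        "### Assistant:"
    )
    return {
        "prompt": prompt,
        "max_new_tokens": 200,       # caption length budget (assumed value)
        "temperature": 0.2,          # keep captions deterministic-ish
        "stopping_strings": ["### Human:"],
    }

# Demonstration with stand-in bytes; a real run would read an image file
# and POST json.dumps(payload) to API_URL.
payload = build_caption_payload(b"\xff\xd8\xff\xe0-fake-jpeg", "Describe this image concisely.")
print(json.dumps(payload)[:60])
```

A caller would then POST this payload to `API_URL` and read the generated caption from the JSON response.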
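
Similarly, the llama.cpp server steps (items 50–53) revolve around its `/completion` endpoint, which accepts base64 image data referenced from the prompt by numeric id. A minimal sketch of building such a request follows; the port, prompt template, and parameter values are assumptions, not settings confirmed by the video.

```python
import base64

# llama.cpp server's default port and completion endpoint (assumed).
LLAMA_SERVER_URL = "http://127.0.0.1:8080/completion"

def build_llava_request(image_bytes: bytes, question: str) -> dict:
    """Build a /completion request body for a LLaVA-enabled llama.cpp server.

    The image is supplied in image_data with an id, and referenced in the
    prompt as [img-<id>] so the server knows where to attach it.
    """
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "prompt": f"USER: [img-10]\n{question}\nASSISTANT:",
        "n_predict": 128,            # max tokens to generate (assumed value)
        "temperature": 0.2,
        "image_data": [{"data": b64, "id": 10}],
    }

request_body = build_llava_request(b"\xff\xd8\xff\xe0-fake-jpeg", "What is in this image?")
print(sorted(request_body.keys()))
```

As with the text-generation-webui example, the body would be POSTed as JSON to `LLAMA_SERVER_URL`, with the caption returned in the response's `content` field.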
