Explore the performance of local large language models across a diverse range of hardware in this comprehensive video comparison. Watch firsthand testing of Llama 3.1 and Llama 3.2 running under Ollama on devices ranging from a budget-friendly Raspberry Pi 4 to a high-end Dell AI workstation. Learn how to install Ollama on Windows and observe the capabilities of various systems, including a Herk Orion Mini PC, a gaming rig with a Threadripper 3970X, an M2 Mac Pro, and a powerful workstation featuring a Threadripper 7995WX with NVIDIA RTX 6000 Ada GPUs. Discover which hardware configurations excel at handling different model sizes, up to the massive 405-billion-parameter model. Gain insight into the practical aspects of running LLMs locally and the trade-offs between cost and performance across this wide spectrum of computing devices.
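For reference, the basic Ollama workflow demonstrated in the video can be sketched with the standard CLI commands below. The install script line is the documented Linux/macOS method (Windows uses a downloadable installer from ollama.com); the specific model tags shown are plausible choices for the sizes discussed, not necessarily the exact tags used in the video.

```shell
# Install Ollama (Linux/macOS; on Windows, download and run the installer from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull and run models of increasing size (tags are assumptions for illustration)
ollama run llama3.2:3b     # small model -- feasible on low-end devices like a Raspberry Pi
ollama run llama3.1:8b     # mid-size model -- comfortable on a typical desktop GPU
ollama run llama3.1:405b   # massive model -- needs a very high-memory workstation

# Inspect installed models and what is currently loaded
ollama list
ollama ps
```

Running a model that exceeds available VRAM forces Ollama to spill layers to system RAM and CPU, which is why the same model can differ by orders of magnitude in tokens per second across the machines compared here.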