

Running Open Large Language Models in Production with Ollama and Serverless GPUs

Devoxx via YouTube

Overview

Explore the deployment of open large language models in production environments using Ollama and serverless GPUs. Learn why companies are increasingly interested in running open models like Gemma and Llama, which offer full control over deployment, model upgrades, and data privacy. Discover how to leverage Ollama, a popular open-source LLM inference server, for both local and containerized environments. Gain practical insights into deploying an application that utilizes an open model with Ollama on Cloud Run, featuring scale-to-zero capabilities and serverless GPUs. This 43-minute talk from Devoxx provides valuable knowledge for organizations looking to harness the power of open LLMs while maintaining control over their AI infrastructure.
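The overview describes serving an open model through Ollama, whether the server runs locally or behind a Cloud Run service with serverless GPUs. As a rough illustration, the sketch below is a minimal Python client for Ollama's standard /api/generate endpoint on its default port 11434; the OLLAMA_URL environment variable and the "gemma" model name are illustrative assumptions, not details taken from the talk.

```python
# Minimal sketch: call an Ollama server from Python, whether it runs locally
# or as a deployed Cloud Run service. OLLAMA_URL and the model name are
# assumptions for illustration; substitute your own deployment's URL and model.
import json
import os
import urllib.request

# Local default; for a Cloud Run deployment this would be the service URL.
OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://localhost:11434")

def generate(prompt: str, model: str = "gemma") -> str:
    """Send a single non-streaming generation request to Ollama's REST API."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["response"]

if __name__ == "__main__":
    print(generate("Explain what scale-to-zero means in one sentence."))
```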

Syllabus

Running open large language models in production with Ollama and serverless GPUs by Wietse Venema

Taught by

Devoxx

