How Replit Trained Their Own LLMs - LLM Bootcamp

The Full Stack via YouTube

Overview

Explore the comprehensive process of training custom Large Language Models (LLMs) in this 32-minute conference talk by Reza Shabani from Replit. Gain insights into the entire workflow, from data processing to deployment, including the modern LLM stack, data pipelines using Databricks and Hugging Face, preprocessing techniques, tokenizer training, and running training with MosaicML and Weights & Biases. Learn about testing and evaluation methods using HumanEval and Hugging Face, as well as deployment strategies involving FasterTransformer, Triton Server, and Kubernetes. Discover valuable lessons on data-centrism, evaluation, and collaboration, and understand the qualities that make an effective LLM engineer.

Syllabus

Why train your own LLMs?
The Modern LLM Stack
Data Pipelines: Databricks & Hugging Face
Preprocessing
Tokenizer Training
Running Training: MosaicML, Weights & Biases
Testing & Evaluation: HumanEval, Hugging Face
Deployment: FasterTransformer, Triton Server, k8s
Lessons learned: data-centrism, eval, and collaboration
What makes a good LLM engineer?
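The testing and evaluation step in the syllabus uses HumanEval, which scores a code model by pass@k: the probability that at least one of k sampled completions passes a problem's unit tests. As a minimal sketch (not Replit's actual evaluation code), the unbiased pass@k estimator from the HumanEval paper can be written with only the standard library, given n samples per problem of which c passed:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    n: total completions sampled for a problem
    c: how many of those passed the unit tests
    k: evaluation budget (e.g. 1 for pass@1)
    """
    if n - c < k:
        # Too few failures to fill a draw of k, so at least one
        # success is guaranteed.
        return 1.0
    # 1 minus the probability that all k drawn samples fail.
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 2 samples, 1 correct -> pass@1 is 0.5
print(pass_at_k(2, 1, 1))
```

Averaging this estimate over all problems in the benchmark gives the headline pass@k score.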

Taught by

The Full Stack

