
The Era of 1-bit LLMs Explained - BitNet b1.58 and New Scaling Laws

Unify via YouTube

Overview

Explore the research presented in the paper "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits" during this 58-minute session. Delve into the BitNet b1.58 model, in which every weight is ternary, taking one of the three values {-1, 0, 1}; since a three-valued weight carries log2(3) ≈ 1.58 bits of information, the model is described as a 1.58-bit LLM. Learn how BitNet b1.58 matches full-precision Transformer LLMs in perplexity and end-task performance while being significantly more cost-effective in latency, memory, throughput, and energy consumption, and how this result points to a new scaling law and training recipe for high-performance, cost-effective large language models. Gain insights from the research led by Shuming Ma and Hongyu Wang at Microsoft Research, and understand its potential impact on the future of AI development. Learn about additional resources for staying updated on AI research, industry trends, and deployment strategies.
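
For readers curious about the mechanics behind the ternary weights, the paper describes an "absmean" quantization: each weight matrix is scaled by its mean absolute value, then every entry is rounded to the nearest value in {-1, 0, 1}. The NumPy sketch below illustrates that idea; the function name and the epsilon constant are illustrative choices for this example, not taken from any released BitNet codebase.

```python
import numpy as np

def absmean_ternary_quantize(w: np.ndarray, eps: float = 1e-5):
    """Round weights to {-1, 0, 1} after scaling by their mean
    absolute value (the absmean recipe described in the paper)."""
    gamma = np.abs(w).mean()          # per-tensor absmean scale
    w_scaled = w / (gamma + eps)      # bring weights to roughly unit range
    w_ternary = np.clip(np.round(w_scaled), -1.0, 1.0)
    return w_ternary, gamma           # gamma is kept to undo the scaling

# Toy demonstration on a random weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(4, 4)).astype(np.float32)
q, gamma = absmean_ternary_quantize(w)
print(q)          # entries are only -1.0, 0.0, or 1.0
print(gamma)      # scale factor for approximating the original weights
```

Because every quantized weight is -1, 0, or 1, matrix multiplication against these weights reduces to additions and subtractions with no floating-point multiplies, which is the source of the latency and energy savings the paper reports.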

Syllabus

The Era of 1-bit LLMs Explained

Taught by

Unify

