
YouTube

Trying Out Flan 20B with UL2 - Working in Colab with 8-Bit Inference

Sam Witteveen via YouTube

Overview

Explore the capabilities of Google's latest publicly released Flan model, the 20-billion-parameter Flan-UL2, in this video tutorial. Learn how to run the model on a high-end Google Colab instance using the HuggingFace library and 8-bit inference. Discover the model's performance on various tasks, including chain-of-thought prompting, zero-shot logical reasoning, generation, story writing, common-sense reasoning, and speech writing. Gain insights into loading the model, comparing standard and 8-bit inference, testing large token spans, and using the HuggingFace Inference API. Follow along with the provided Colab notebook to experiment with this powerful language model firsthand.
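The loading and prompting workflow described above can be sketched as follows. This is a minimal sketch, not the notebook's exact code: it assumes a Colab GPU with enough memory and the `transformers`, `accelerate`, `bitsandbytes`, and `sentencepiece` packages installed; the juggler question is a hypothetical example prompt.

```python
MODEL_ID = "google/flan-ul2"  # the public Flan-UL2 checkpoint on the Hub

# A chain-of-thought style prompt: appending "Let's think step by step."
# nudges the model to reason aloud before answering.
COT_PROMPT = (
    "Q: A juggler has 16 balls. Half are golf balls, and half of the "
    "golf balls are blue. How many blue golf balls are there?\n"
    "A: Let's think step by step."
)

def load_model_8bit():
    """Load the tokenizer and an 8-bit quantized model.

    Imported lazily so this sketch can be read/run without the heavy
    GPU dependencies installed.
    """
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(
        MODEL_ID,
        load_in_8bit=True,   # quantize weights via bitsandbytes to fit one GPU
        device_map="auto",   # place layers on available devices automatically
    )
    return tokenizer, model

def generate(tokenizer, model, prompt, max_new_tokens=200):
    """Tokenize a prompt and decode the model's completion."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

With the model loaded, `generate(tokenizer, model, COT_PROMPT)` returns the model's step-by-step reasoning as a plain string; swapping in the other syllabus prompts (story writing, speech writing, etc.) works the same way.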

Syllabus

Flan-20B-UL2 Launched
Loading the Model
Non-8-Bit Inference
8-Bit Inference with CoT
Chain-of-Thought Prompting
Zero-Shot Logical Reasoning
Zero-Shot Generation
Zero-Shot Story Writing
Zero-Shot Common Sense Reasoning
Zero-Shot Speech Writing
Testing a Large Token Span
Using the HuggingFace Inference API
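The final syllabus item, querying the hosted model through the HuggingFace Inference API, avoids downloading the 20B weights entirely. A minimal sketch using only the standard library is below; the endpoint URL follows HuggingFace's standard per-model pattern, and `HF_TOKEN` is a placeholder for your own API token.

```python
import json
from urllib import request

API_URL = "https://api-inference.huggingface.co/models/google/flan-ul2"
HF_TOKEN = "hf_..."  # placeholder: substitute your own HuggingFace token

def build_payload(prompt, max_new_tokens=200, temperature=0.7):
    """Assemble the JSON body the text-generation endpoint expects."""
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }

def query(prompt):
    """POST the prompt to the hosted endpoint and return the generated text."""
    body = json.dumps(build_payload(prompt)).encode("utf-8")
    req = request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {HF_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        result = json.loads(resp.read())
    return result[0]["generated_text"]
```

Calling `query("Write a short speech about learning.")` with a valid token returns the model's completion without any local GPU at all, which is the trade-off the video highlights: convenience versus control over generation.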

Taught by

Sam Witteveen

