

LLaMa for Developers

via LinkedIn Learning

Overview

Get an introduction to the popular open-source LLaMA model: its architecture and how to fine-tune, deploy, and prompt it.

Syllabus

Introduction
  • Developing AI models using LLaMA
1. Introduction to LLaMA
  • Using LLaMA online
  • Running LLaMA in a notebook
  • Accessing LLaMA in an enterprise environment
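A minimal sketch of what "Running LLaMA in a notebook" can look like in practice, assuming the Hugging Face transformers library and a gated meta-llama checkpoint (both assumptions, not course material):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Assumed checkpoint; access to meta-llama weights must be requested on Hugging Face.
    model_name = "meta-llama/Llama-2-7b-chat-hf"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16, device_map="auto"  # fp16 on an available GPU
    )

    inputs = tokenizer("Explain what a context window is.", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))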
2. LLaMA Architecture
  • The LLaMA architecture
  • The LLaMA tokenizer
  • The LLaMA context window
  • Differences between LLaMA 1 and 2
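For the tokenizer and context-window topics above, a small sketch of inspecting how LLaMA's SentencePiece tokenizer splits text (the checkpoint name is an assumption):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed checkpoint
    ids = tokenizer.encode("LLaMA uses a SentencePiece BPE tokenizer.")
    print(ids)                                   # token ids, including the leading BOS token
    print(tokenizer.convert_ids_to_tokens(ids))  # the subword pieces those ids represent
    # Prompt plus generated tokens must together fit within the model's context window.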
3. Fine-Tuning LLaMA
  • Fine-tuning LLaMA with a few examples
  • Fine-tuning LLaMA and freezing layers
  • Fine-tuning LLaMA using LoRA
  • Reinforcement learning with RLHF and DPO
  • Fine-tuning larger LLaMA models
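A hedged sketch of the LoRA approach covered in this module, using the Hugging Face peft library (the checkpoint, rank, and target modules below are illustrative assumptions, not the course's own settings):

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed checkpoint

    # Attach low-rank adapters to the attention projections; the base weights stay frozen.
    lora_config = LoraConfig(
        r=8,                                   # rank of the update matrices
        lora_alpha=16,                         # scaling applied to the adapter output
        target_modules=["q_proj", "v_proj"],   # which layers receive adapters
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of all parameters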
4. Serving LLaMA
  • Resources required to serve LLaMA
  • Quantizing LLaMA
  • Using TGI for serving LLaMA
  • Using vLLM for serving LLaMA
  • Using DeepSpeed for serving LLaMA
  • Explaining LoRA and SLoRA
  • Using a vendor for serving LLaMA
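One way the serving options above can look in code, here with vLLM (the checkpoint and sampling settings are assumptions; TGI and DeepSpeed expose their own, different interfaces):

    from vllm import LLM, SamplingParams

    llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")  # assumed checkpoint; needs a suitable GPU
    params = SamplingParams(temperature=0.7, max_tokens=128)

    prompts = ["Summarize the trade-offs of quantizing a large language model."]
    for output in llm.generate(prompts, params):
        print(output.outputs[0].text)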
5. Prompting LLaMA
  • Differences between LLaMA and commercial LLMs
  • Few-shot learning with LLaMA
  • Chain of thought with LLaMA
  • Using schemas with LLaMA
  • Optimizing LLaMA prompts with DSPy
  • Challenge: Generating product tags
  • Solution: Generating product tags
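A rough few-shot prompting sketch in the spirit of the product-tag challenge above (the model name, example products, and tags are invented for illustration):

    from transformers import pipeline

    generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")  # assumed checkpoint

    # Two worked examples steer the model toward the desired tag format.
    prompt = (
        "Generate short product tags.\n"
        "Product: Wireless noise-cancelling headphones\nTags: audio, wireless, travel\n"
        "Product: Stainless steel water bottle\nTags: hydration, outdoor, reusable\n"
        "Product: Mechanical keyboard with RGB lighting\nTags:"
    )
    result = generator(prompt, max_new_tokens=32, do_sample=False)
    print(result[0]["generated_text"])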
Conclusion
  • Continue your LLaMA AI model development journey

Taught by

Denys Linkov

Reviews

4.7 rating at LinkedIn Learning based on 84 ratings

