

Working with Llama 3

via DataCamp

Overview

Explore the latest techniques for running the Llama LLM locally, fine-tuning it, and integrating it within your stack.

Open-source LLMs like Llama can be hosted locally on consumer-grade hardware, enhancing data privacy and reducing costs. This Llama course explores the techniques that make this possible: use the model locally, fine-tune it for domain-specific problems with Hugging Face libraries, and integrate it with LangChain to build AI-powered applications. Finally, run the model more efficiently using compression techniques. This course is designed for learners with some experience in Hugging Face’s transformers library and familiarity with LLM concepts such as fine-tuning and prompting.

Syllabus

  • Understanding LLMs and Llama
    • The field of large language models has exploded, and Llama is a standout. With Llama 3, the possibilities have soared. Explore how the model was built, learn to use it with llama-cpp-python, and craft precise prompts to control its behavior.
  • Using Llama Locally
    • Language models are often useful as agents, and in this chapter, you'll leverage llama-cpp-python for local text generation and for creating agents with personalities. You'll also learn how decoding parameters affect output quality. Finally, you'll build specialized inference classes for diverse text generation tasks.
  • Fine-Tuning Llama for Customer Service Using Hugging Face and the Bitext Dataset
    • Language models are powerful, and the right techniques unlock their full potential. Learn how fine-tuning smaller Llama models can significantly improve their performance on specific tasks. Then discover parameter-efficient fine-tuning techniques such as LoRA, and explore quantization to load and use even larger models.
  • Creating a Customer Service Chatbot with Llama and LangChain
    • LLMs work best when they solve a real-world problem, such as creating a customer service chatbot using Llama and LangChain. Explore how to customize LangChain, integrate fine-tuned models, and craft templates for a real-world use case, utilizing RAG to enhance your chatbot's intelligence and accuracy. This chapter equips you with the technical skills to develop responsive and specialized chatbots.
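The course materials themselves aren't published here, but the effect of the decoding parameters covered in the second chapter can be sketched in plain Python. The vocabulary and logits below are invented for illustration; llama-cpp-python exposes the same knobs (`temperature`, `top_k`) when calling a real model.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to probabilities; lower temperature sharpens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_filter(probs, k):
    """Keep only the k most likely tokens and renormalise their probabilities."""
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)
    return {i: probs[i] / total for i in top}

# Toy vocabulary with made-up model scores (logits)
vocab = ["cat", "dog", "fish", "car"]
logits = [2.0, 1.5, 0.5, -1.0]

greedy = softmax_with_temperature(logits, temperature=0.1)
creative = softmax_with_temperature(logits, temperature=2.0)

print(round(greedy[0], 3))    # ~0.993: low temperature concentrates mass on "cat"
print(round(creative[0], 3))  # ~0.404: high temperature spreads mass across tokens
print(top_k_filter(softmax_with_temperature(logits), k=2))  # only "cat" and "dog" survive
```

Low temperature makes output near-deterministic; high temperature and larger `top_k` trade coherence for variety, which is the quality trade-off the chapter explores.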
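The appeal of LoRA, mentioned in the fine-tuning chapter, is easy to see from a parameter count. A minimal sketch, with made-up layer dimensions: LoRA freezes a weight matrix W (d_out × d_in) and trains only two small matrices B (d_out × r) and A (r × d_in), applying the update W + B·A.

```python
def lora_param_counts(d_in, d_out, rank):
    """Trainable parameters: full fine-tune of one weight matrix vs. a LoRA adapter.

    Full fine-tuning updates all d_out * d_in entries of W.
    LoRA trains only B (d_out x rank) and A (rank x d_in).
    """
    full = d_in * d_out
    lora = rank * (d_in + d_out)
    return full, lora

# Hypothetical 4096x4096 attention projection with rank-8 adapters
full, lora = lora_param_counts(4096, 4096, rank=8)
print(full)                  # 16777216 trainable parameters for full fine-tuning
print(lora)                  # 65536 trainable parameters with LoRA
print(f"{lora / full:.2%}")  # 0.39% of the original count
```

Training well under 1% of the parameters per adapted layer is what makes fine-tuning larger Llama models feasible on modest hardware, especially combined with the quantization techniques the chapter also covers.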
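The RAG step in the chatbot chapter boils down to: retrieve the most relevant document, then prepend it to the prompt. LangChain and a real vector store handle this in production; the sketch below uses a simple bag-of-words cosine similarity and invented support documents purely to show the shape of the idea.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in docs]
    return max(scored, key=lambda pair: pair[0])[1]

# Hypothetical customer-service knowledge base
docs = [
    "Refunds are processed within 5 business days of approval.",
    "Our store is open Monday to Friday, 9am to 5pm.",
    "Shipping is free on orders over 50 dollars.",
]

query = "How long do refunds take?"
context = retrieve(query, docs)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(context)  # the refunds document is selected
```

Grounding the model in retrieved context is what gives the chatbot its accuracy: the LLM answers from the knowledge base rather than from memory alone.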

Taught by

Imtihan Ahmed

