Overview
Dive into a comprehensive crash course on generative AI fine-tuning techniques for Large Language Models (LLMs). Explore quantization, LoRA, QLoRA, and 1-bit LLM concepts through theoretical and practical insights. Learn to fine-tune popular models such as Llama 2 and Google Gemma, build no-code LLM pipelines, and customize models with your own data. Gain hands-on experience through provided code examples and in-depth explanations of advanced techniques in natural language processing and machine learning.
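To give a flavor of the hands-on material, the sketch below shows the kind of workflow the quantization and LoRA/QLoRA topics build toward: loading a base model in 4-bit precision and attaching LoRA adapters with the Hugging Face PEFT library. This is a minimal illustrative example, not the course's actual code; the model name, target modules, and hyperparameters are assumptions.

```python
# Minimal QLoRA-style setup sketch (illustrative; not the course's exact code).
# Assumes the transformers, peft, and bitsandbytes packages are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # assumed model; the course also covers Google Gemma

# 4-bit NF4 quantization config -- the "Q" in QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)

# LoRA adapter config: the frozen base model stays quantized,
# and only small low-rank update matrices are trained.
lora_config = LoraConfig(
    r=16,                                  # rank of the update matrices (assumed value)
    lora_alpha=32,                         # scaling factor (assumed value)
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows how few parameters LoRA actually trains
```

The resulting model can then be passed to a standard training loop or trainer on your own dataset, which is the pattern the later custom-data lessons work through in detail.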
Syllabus
Introduction
Quantization Intuition
LoRA and QLoRA In-Depth Intuition
Fine-Tuning with Llama 2
1-Bit LLM In-Depth Intuition
Fine-Tuning with Google Gemma Models
Building LLM Pipelines with No Code
Fine-Tuning with Your Own Custom Data
Taught by
Krish Naik