Fine-Tuning Llama 3 on a Custom Dataset for RAG Q&A - Training LLM on a Single GPU

Venelin Valkov via YouTube

Classroom Contents

  1. Why fine-tuning?
  2. Text tutorial on MLExpert.io
  3. Fine-tuning process overview
  4. Dataset
  5. Llama 3 8B Instruct
  6. Google Colab setup
  7. Loading model and tokenizer
  8. Create custom dataset
  9. Establish baseline
  10. Training on completions
  11. LoRA setup
  12. Training
  13. Load model and push to HuggingFace Hub
  14. Evaluation: comparison with the base model
  15. Conclusion
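A central step in the workflow above is turning raw Q&A records into prompts the model can be fine-tuned on (the "create custom dataset" and "training on completions" steps). The sketch below illustrates this, assuming a dataset with `question`/`context`/`answer` fields; the helper name and field names are hypothetical, while the special tokens are Llama 3 Instruct's published chat-template tokens.

```python
# Sketch (assumed, not the video's exact code): format one RAG Q&A record
# into the Llama 3 Instruct chat template as a single training string.
def format_rag_example(question: str, context: str, answer: str) -> str:
    """Render a question/context/answer triple in Llama 3 Instruct format."""
    system = "Answer the question using only the provided context."
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"Context: {context}\n\nQuestion: {question}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
        f"{answer}<|eot_id|>"
    )

example = format_rag_example(
    question="Which GPU setup does the tutorial use?",
    context="The tutorial fine-tunes Llama 3 8B on a single GPU in Google Colab.",
    answer="A single GPU in Google Colab.",
)
print(example)
```

When "training on completions", the loss is typically computed only on the tokens after the final assistant header, so the model learns to produce the answer rather than to reproduce the prompt.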
