Top Ten Tips for Fine-tuning Large Language Models

Trelis Research via YouTube

Tip 7: Start by only training on one GPU
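
The video itself is not transcribed here, but the idea behind Tip 7 is to debug the full training loop on a single GPU before adding multi-GPU launchers. A minimal sketch, assuming a PyTorch-based setup; the device index and the `train.py` launcher command are illustrative placeholders, not taken from the video:

```python
# Minimal sketch (not from the video): pin the run to a single GPU while
# debugging the data pipeline and loss curve, then scale out later.
import os

# Expose only GPU 0 to this process; set this *before* importing torch so
# every downstream library sees exactly one device.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

print("visible GPUs:", torch.cuda.device_count())  # expect 1
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# ... build the tokenizer, model, and training loop here as usual ...
# Once a single-GPU run converges sensibly, scale out with a launcher such as
# `torchrun --nproc_per_node=4 train.py` (hypothetical script name).
```
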


Class Central Classrooms

YouTube videos curated by Class Central.

Classroom Contents


  1. Top Ten Fine-tuning Tips
  2. Tip 1: Start with a Small Model
  3. Tip 2: Use LoRA or QLoRA (see the sketch after this list)
  4. Tip 3: Create 10 manual questions
  5. Tip 4: Create datasets manually
  6. Tip 5: Start training with just 100 rows
  7. Tip 6: Always create a validation data split
  8. Tip 7: Start by only training on one GPU
  9. Tip 8: Use Weights & Biases for logging
  10. Scale up rows, tuning type, then model size
  11. Tip 9: Consider unsupervised fine-tuning if you have lots of data
  12. Tip 10: Use preference fine-tuning (ORPO)
  13. Recap of the ten tips
  14. Ten tips applied to multi-modal fine-tuning
  15. Playlists to watch
  16. Trelis repo overview
  17. ADVANCED Fine-tuning repo: Trelis.com/ADVANCED-fine-tuning
  18. Training on completions only
  19. ADVANCED fine-tuning repo CONTINUED
  20. ADVANCED vision: Trelis.com/ADVANCED-vision
  21. ADVANCED inference: trelis.com/enterprise-server-api-and-inference-guide/
  22. ADVANCED transcription: trelis.com/ADVANCED-transcription
  23. Support + Resources: Trelis.com/About
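
Several of the listed tips amount to a few lines of training configuration. Below is a minimal sketch, assuming the Hugging Face transformers, peft, and datasets libraries with the Weights & Biases integration; the model name, data file, and hyperparameter values are illustrative placeholders, not taken from the videos. It touches Tips 1 (small model), 2 (LoRA), 5-6 (small dataset with a validation split), and 8 (logging).

```python
# Minimal sketch, assuming the Hugging Face transformers / peft / datasets
# stack and the wandb integration; names and values below are placeholders.
from datasets import load_dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # Tip 1: start small
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tip 2: attach a LoRA adapter instead of fully fine-tuning the base model.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Tips 5-6: a small, manually curated dataset with a held-out validation split.
# "my_100_rows.jsonl" is a hypothetical file of ~100 training rows.
dataset = load_dataset("json", data_files="my_100_rows.jsonl")["train"]
splits = dataset.train_test_split(test_size=0.1, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]

# Tip 8: send training metrics to Weights & Biases via the Trainer integration.
args = TrainingArguments(
    output_dir="out",
    report_to="wandb",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    logging_steps=10,
)

# Tokenization and the Trainer call are omitted here; the Trelis
# ADVANCED-fine-tuning repo linked above contains complete scripts.
```
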
