Embeddings vs Fine-Tuning - Part 1: Understanding and Implementing Embeddings

Trelis Research via YouTube

Classroom Contents

  1. Should I use embeddings or fine-tuning?
  2. How does semantic search work?
  3. How to use embeddings with a language model?
  4. The two keys to success with embeddings
  5. How do cosine similarity and dot product similarity work? (see the first sketch after this list)
  6. How to embed a dataset? Touch Rugby Rules
  7. How to prepare data for embedding?
  8. Chunking a dataset for embeddings (see the second sketch after this list)
  9. What length of embeddings should I use?
  10. Loading Llama 2 13B with GPTQ in Google Colab
  11. Installing Llama 2 13B with GPTQ
  12. Llama Performance without Embeddings
  13. What embeddings should I use?
  14. How to use OpenAI Embeddings
  15. Using SBERT or "Marco" embeddings
  16. How to create embeddings from data
  17. Calculating similarity using the dot product (see the third sketch after this list)
  18. Evaluating performance using embeddings
  19. Using ChatGPT to Evaluate Performance of Embeddings
  20. Llama 13B Incorrect, GPT-4 Correct
  21. Llama 13B and GPT-4 Incorrect
  22. Embeddings incorrect AND Llama 13B and GPT-4 Hallucinate
  23. Summary of Embeddings Performance with Llama 2 and GPT-4
  24. Pro tips for further improving performance with embeddings
  25. ColBERT approach to improve embeddings
  26. Top Tips for using Embeddings with Language Models
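
For chapters 5 and 17, here is a minimal sketch of the two similarity measures in plain NumPy. The function names and example vectors are illustrative, not taken from the video.

```python
import numpy as np

def dot_product_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Raw dot product: sensitive to vector magnitude as well as direction.
    return float(np.dot(a, b))

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Dot product of the unit-normalised vectors: direction only, range [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative 2-D vectors (real embeddings have hundreds of dimensions).
query = np.array([0.6, 0.8])
doc = np.array([0.8, 0.6])
print(dot_product_similarity(query, doc))  # 0.96
print(cosine_similarity(query, doc))       # 0.96 (these vectors are already unit length)
```

For embeddings that are already normalised to unit length, the two measures coincide, which is why a plain dot product is often used in practice.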
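Chapters 7 and 8 cover preparing and chunking the source document. The sketch below shows one simple approach, fixed-size character windows with overlap; the chunk size, overlap, and file name are assumptions for illustration, not the video's settings.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows so that a fact cut at
    a chunk boundary also appears intact in a neighbouring chunk."""
    step = chunk_size - overlap
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]

# Hypothetical file name standing in for the Touch Rugby rules document.
rules = open("touch_rugby_rules.txt").read()
chunks = chunk_text(rules)
```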
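Chapters 13 through 17 walk through choosing an embedding model, embedding the chunks, and ranking them against a query by dot product. The sketch below uses the sentence-transformers library with an MS MARCO ("Marco") SBERT checkpoint; the model name, sample chunks, and query are illustrative assumptions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# An MS MARCO-trained SBERT model, tuned for query-to-passage retrieval.
# (Model choice is an assumption; any sentence-transformers checkpoint works.)
model = SentenceTransformer("msmarco-distilbert-base-v4")

chunks = [
    "Each team has six players on the field in touch rugby.",
    "A touchdown is worth one point.",
    "Play restarts with a tap after a penalty.",
]  # in practice, the chunks produced in the previous sketch

# normalize_embeddings=True yields unit vectors, so dot product equals cosine similarity.
chunk_vectors = model.encode(chunks, normalize_embeddings=True)
query_vector = model.encode(
    "How many players are on the field?", normalize_embeddings=True
)

scores = chunk_vectors @ query_vector   # one dot product per chunk
top = np.argsort(scores)[::-1][:2]      # indices of the two best-matching chunks
context = "\n\n".join(chunks[i] for i in top)
print(context)
```

The top-ranked chunks are then prepended to the prompt given to the language model (Llama 2 13B or GPT-4 in the video), which is the retrieval setup whose performance chapters 18 through 23 evaluate.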
