Understanding Mixture of Experts in Large Language Models

Trelis Research via YouTube

Classroom Contents

  1. GPT-3, GPT-4 and Mixture of Experts
  2. Why Mixture of Experts?
  3. The idea behind Mixture of Experts
  4. How to train MoE
  5. Problems training MoE
  6. Adding noise during training
  7. Adjusting the loss function for router evenness
  8. Is MoE useful for LLMs on laptops?
  9. How might MoE help big companies like OpenAI?
  10. Disadvantages of MoE
  11. Binary tree MoE fast feed forward
  12. Data on GPT vs MoE vs FFF
  13. Inference speed up with binary tree MoE
  14. Recap - Does MoE make sense?
  15. Why might big companies use MoE?
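The chapters above cover how an MoE layer routes tokens to experts, why noise is added during training, and how the loss is adjusted to keep the router's expert usage even. As a rough illustration of those ideas (not code from the video), the sketch below shows a top-k MoE feed-forward layer in PyTorch with noisy gating and a load-balancing auxiliary loss; all sizes and names (d_model, n_experts, top_k, noise_std) are illustrative assumptions.

```python
# Minimal, illustrative sketch of a top-k MoE feed-forward layer.
# Hyperparameters and names here are assumptions, not values from the video.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2, noise_std=0.1):
        super().__init__()
        self.n_experts, self.top_k, self.noise_std = n_experts, top_k, noise_std
        # Each expert is an ordinary feed-forward block (the core MoE idea).
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        # The router scores every token against every expert.
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, x):  # x: (num_tokens, d_model)
        logits = self.router(x)
        if self.training and self.noise_std > 0:
            # Noise during training nudges tokens toward under-used experts.
            logits = logits + torch.randn_like(logits) * self.noise_std
        probs = F.softmax(logits, dim=-1)              # (num_tokens, n_experts)
        top_p, top_i = probs.topk(self.top_k, dim=-1)  # keep only the top-k experts

        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_idx, slot = (top_i == e).nonzero(as_tuple=True)
            if token_idx.numel() == 0:
                continue  # this expert received no tokens in this batch
            weight = top_p[token_idx, slot].unsqueeze(-1)
            out[token_idx] += weight * expert(x[token_idx])

        # Auxiliary loss rewarding even router usage: product of soft routing
        # probabilities and hard token counts per expert, summed over experts.
        importance = probs.mean(dim=0)
        load = F.one_hot(top_i, self.n_experts).float().sum(dim=1).mean(dim=0)
        aux_loss = self.n_experts * torch.sum(importance * load)
        return out, aux_loss

# Example: combine the auxiliary loss with the task loss during training.
layer = MoEFeedForward()
tokens = torch.randn(16, 512)
output, aux = layer(tokens)
total_loss = output.pow(2).mean() + 0.01 * aux  # dummy task loss + balancing term
```

The binary-tree "fast feed forward" variant covered later in the course replaces a flat router like this with a sequence of learned binary routing decisions, so each token activates only a small (roughly logarithmic) fraction of the layer at inference time, which is where the speed-up comparisons in the later chapters come from.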
