Unlock Faster and More Efficient LLMs with SparseGPT - Neural Magic

Neural Magic via YouTube

Classroom Contents

  1. Intro
  2. Massive Deep Models are Great
  3. The Neural Network Pruning Problem
  4. The Mathematics of Compression
  5. One-Shot Compression of GPT Models
  6. The General Approach
  7. Our Approach: Quantization Version
  8. Experimental Validation
  9. Combining Sparsity and Quantization
  10. Exploiting with DeepSparse
  11. Software Beats Hardware (continued)
  12. Transforming the Pareto Frontier
  13. Enabling Anyone to Run
  14. Enabling Anyone to Sparsify
  15. Questions
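The chapters above cover pruning LLM weights to a target sparsity level. As a minimal illustration of what "pruning" means here, the sketch below performs one-shot unstructured magnitude pruning on a weight matrix. This is not the SparseGPT algorithm from the talk (which uses Hessian-based weight updates); it only shows the basic idea of zeroing out a chosen fraction of the smallest-magnitude weights.

```python
# Illustrative sketch only: one-shot unstructured magnitude pruning.
# SparseGPT itself uses a more sophisticated second-order method; this
# simply zeroes the smallest-magnitude entries to reach a sparsity target.

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` (a list of rows) with roughly the
    `sparsity` fraction of smallest-magnitude entries set to zero.
    Ties at the threshold may prune slightly more than requested."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)            # number of weights to drop
    if k == 0:
        return [list(row) for row in weights]
    threshold = flat[k - 1]                  # largest magnitude to drop
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]
```

For example, pruning a 2x2 matrix at 50% sparsity keeps only the two largest-magnitude weights; the zeroed entries can then be skipped entirely by a sparsity-aware runtime such as DeepSparse.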
