Overview
Explore the potential of Tensor Networks to improve the explainability and controllability of Large Language Models (LLMs) in this one-hour seminar. Delve into the literature that addresses these goals through modifications to LLM building blocks such as transformer self-attention and multi-layer perceptron layers. Gain additional insights from "Tensor Networks Meet Neural Networks: A Survey and Future Perspectives" and other relevant papers. Examine case studies, including Hypoformer and approaches from Multiverse and Terra Quantum, to understand how Tensor Networks can improve the processing of sensitive data in LLMs. Discover the intersection of Tensor Networks and Neural Networks, and consider future perspectives on this rapidly evolving field.
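To give a concrete flavor of the building-block modifications the seminar covers, the sketch below shows one common idea from the tensor-network literature: replacing a dense weight matrix in a multi-layer perceptron layer with a tensor-train (matrix product) factorization. The dimensions, rank, and layer shape here are illustrative assumptions, not taken from any specific paper discussed in the seminar.

```python
import numpy as np

# Minimal sketch (illustrative dimensions): a tensor-train (TT) factorized
# linear layer, the kind of MLP building-block replacement studied in the
# tensor-network literature. A 16x16 dense weight is represented by two
# small cores, with each index split as 16 = 4 x 4.
rng = np.random.default_rng(0)

d_in, d_out, rank = 16, 16, 2          # assumed toy sizes
G1 = rng.normal(size=(4, 4, rank))     # core 1: (out_1, in_1, rank)
G2 = rng.normal(size=(rank, 4, 4))     # core 2: (rank, out_2, in_2)

def tt_linear(x):
    """Apply the TT-factorized weight to a length-16 input vector."""
    x_mat = x.reshape(4, 4)            # split input index j into (j1, j2)
    y_mat = np.einsum('ajr,rbk,jk->ab', G1, G2, x_mat)
    return y_mat.reshape(d_out)

# The same linear map assembled as a dense matrix, for comparison:
W = np.einsum('ajr,rbk->abjk', G1, G2).reshape(d_out, d_in)

x = rng.normal(size=d_in)
assert np.allclose(tt_linear(x), W @ x)

# Parameter count: 2 small cores vs. one dense matrix
print(G1.size + G2.size, "TT params vs", W.size, "dense params")  # 64 vs 256
```

The factorized layer stores 64 parameters instead of 256 here; at realistic LLM widths the compression is far larger, and the explicit core structure is one reason tensor networks are studied for explainability and control.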
Syllabus
LLM Explainability or Controllability Improvements with Tensor Networks
Taught by
ChemicalQDevice