Overview
Explore how large language models (LLMs) are deployed and run on Polaris, the supercomputer at the Argonne Leadership Computing Facility, in this tutorial presented by Sam Foreman. The session examines the challenges and opportunities of running large AI models on high-performance computing systems and the practical considerations for deploying LLMs on Polaris, including optimization techniques, resource allocation, and scaling strategies. Learn how researchers and data scientists can use supercomputing resources to improve the performance of language models, with practical examples and best practices for working with LLMs in a supercomputing environment.
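As a rough illustration of what multi-GPU resource allocation and scaling can look like in practice (a sketch for orientation, not material taken from the tutorial itself), the snippet below sets up data-parallel training with PyTorch's DistributedDataParallel. The choice of PyTorch, the torchrun launcher, and the placeholder linear model are assumptions made here for illustration; Polaris nodes provide NVIDIA A100 GPUs, for which the NCCL backend is the usual choice.

# Illustrative sketch only: minimal multi-GPU data-parallel training,
# the kind of setup that "resource allocation and scaling" topics concern.
# Assumes RANK, WORLD_SIZE, and LOCAL_RANK are set by the launcher
# (e.g. torchrun); the model and hyperparameters are placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU; NCCL is the standard backend for NVIDIA GPUs.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model standing in for an LLM.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x = torch.randn(8, 1024, device=local_rank)

    for _ in range(10):
        optimizer.zero_grad()
        loss = model(x).square().mean()
        loss.backward()  # gradients are all-reduced across ranks by DDP
        optimizer.step()

    if dist.get_rank() == 0:
        print(f"finished on {dist.get_world_size()} ranks")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()

On a single node with four GPUs this could be launched as, for example, torchrun --nproc_per_node=4 train.py (the script name is hypothetical); multi-node runs additionally require rendezvous settings appropriate to the site's scheduler.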
Syllabus
Sam Foreman: LLMs on Polaris (Tutorial 5)
Taught by
MICDE University of Michigan