Learn the fundamentals of large language models (LLMs) and put them into practice by deploying your own solutions based on open source models. By the end of this course, you will be able to leverage state-of-the-art open source LLMs to create AI applications using a code-first approach.
You will start by gaining an in-depth understanding of how LLMs work, including model architectures like transformers and advancements like sparse expert models. Hands-on labs will walk you through launching cloud GPU instances and running pre-trained models like Code Llama, Mistral, and Stable Diffusion.
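To give a flavor of these labs, here is a minimal sketch of running a pre-trained open source model with the Hugging Face transformers library. The checkpoint name, prompt format, and generation settings are illustrative assumptions rather than the course's exact materials.

```python
# Minimal sketch: run a pre-trained open source LLM on a single cloud GPU.
# Assumes the Hugging Face transformers and accelerate packages and the
# mistralai/Mistral-7B-Instruct-v0.2 checkpoint (an assumption; the labs may differ).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on one GPU
    device_map="auto",          # place weights on the available GPU(s)
)

# Mistral's instruction format wraps the user turn in [INST] ... [/INST].
prompt = "[INST] Explain what a transformer block does in two sentences. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```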
The highlight of the course is a guided project where you will fine-tune a model like LLaMA or Mistral on a dataset of your choice. You will use SkyPilot to scale model training easily across low-cost spot instances on multiple cloud providers. Finally, you will containerize your fine-tuned model and deploy it efficiently with model servers like LoRAX and vLLM.
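To make the fine-tuning step concrete, the sketch below shows one common approach: attaching LoRA adapters to an open source base model with the Hugging Face PEFT library and training them with the Trainer API. The base checkpoint, dataset file, and hyperparameters are placeholder assumptions, and the guided project's actual tooling may differ.

```python
# Minimal sketch: LoRA fine-tuning of an open source LLM with Hugging Face PEFT.
# The base checkpoint, train.jsonl path, and hyperparameters are assumptions.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto"
)

# Wrap the base model with low-rank adapters; only the adapter weights are trained.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical dataset of {"text": ...} records; swap in your own corpus.
data = load_dataset("json", data_files="train.jsonl", split="train")
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mistral-lora",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           bf16=True,
                           logging_steps=10),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("mistral-lora")  # saves only the small LoRA adapter weights
```

Because only the lightweight adapter weights are saved, a script like this is straightforward to launch as a SkyPilot task on spot instances, and the resulting adapter can later be served on top of the shared base model by a server such as LoRAX.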
By the end of the course, you will have first-hand experience leveraging open source LLMs to build AI solutions. The skills you gain will enable you to further advance your career in AI.