Dive into the world of generative AI and learn how to select the right model for your needs in this practical course. You'll gain a solid understanding of how generative AI models work and compare deployment options like web APIs, hosted solutions, and local installations.
By the end of this course, you will be able to:
• Describe the basic architecture of generative AI models
• Compare different AI model deployment options
• Evaluate AI models using benchmarks and custom assessments
• Troubleshoot and improve model performance
• Determine when to use in-context learning vs. retrieval-augmented generation (RAG)
Through hands-on exercises, you'll learn to evaluate models using industry benchmarks and create custom assessments for your specific use cases. You'll also master techniques to troubleshoot and enhance model performance.
What sets this course apart is its focus on real-world application: you'll leave equipped to make informed decisions about AI model selection and optimization for your projects. Whether you're new to AI or looking to deepen your knowledge, this course will empower you to leverage generative AI effectively.
Syllabus
- Lesson 1: Course and Instructor Introduction
- Meet Professor Jesse Spencer-Smith, an experienced practitioner in the field of artificial intelligence. Learn about the course structure and its significance in today's AI-driven world. This lesson lays the foundation for understanding the critical role of model selection in AI implementation and introduces the key concepts you'll master throughout the course.
- Lesson 2: Welcome to the World of AI Models
- Dive into the vast and diverse world of AI models. You'll explore the basic architecture of generative AI, understanding key components like tokenization, semantic spaces, and the decoder stack. This lesson covers the variations in model capabilities, from multimodal processing to long-context understanding. You'll also examine different deployment options, including web access, APIs, and local hosting, understanding the trade-offs in terms of security, cost, and customizability.
- Lesson 3: Comparing Models
- Learn how to effectively evaluate and compare AI models for your specific needs. This lesson introduces you to benchmarking techniques, including industry-standard leaderboards and their limitations. You'll discover how to address challenges like the ceiling effect and contamination in model evaluation. Importantly, you'll learn to create your own benchmarks tailored to your unique tasks, ensuring you can accurately assess model performance for your particular use case.
- Lesson 4: When No Model is Good Enough
- Sometimes, off-the-shelf models don't meet all your requirements. In this final lesson, you'll explore strategies for enhancing model performance. Learn about prompt engineering, in-context learning, and data augmentation techniques. Dive deep into Retrieval Augmented Generation (RAG) and understand its advantages and limitations compared to long-context models. By the end of this lesson, you'll have a toolkit of strategies to optimize AI model performance for your specific needs.
Taught by
Jesse Spencer-Smith