Overview
Explore a comprehensive lecture on fine-tuning flow and diffusion generative models presented by Carles Domingo-Enrich from Valence Labs. Delve into the theoretical foundations and practical applications of improving dynamical generative models through reward fine-tuning. Learn about the novel approach of casting reward fine-tuning as stochastic optimal control (SOC) and discover the importance of enforcing a specific memoryless noise schedule during the fine-tuning process. Examine the newly proposed Adjoint Matching algorithm and its advantages over existing SOC algorithms. Gain insights into how this approach significantly enhances consistency, realism, and generalization to unseen human preference reward models while maintaining sample diversity. Access the related research paper and connect with the AI for drug discovery community through the provided Portal link for further discussions and networking opportunities.
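For orientation on the stochastic optimal control framing mentioned above, here is a generic sketch of the kind of objective involved (illustrative notation only; not necessarily the exact formulation used in the talk or paper): reward fine-tuning seeks a control u that steers the pre-trained generative dynamics toward high-reward samples while penalizing deviation from the base model,

\[
\min_{u}\; \mathbb{E}\!\left[\int_{0}^{1} \tfrac{1}{2}\,\lVert u(X_t,t)\rVert^{2}\,\mathrm{d}t \;-\; r(X_1)\right],
\qquad
\mathrm{d}X_t = \bigl(b(X_t,t) + \sigma(t)\,u(X_t,t)\bigr)\,\mathrm{d}t + \sigma(t)\,\mathrm{d}B_t,
\]

where b denotes the base model's drift, \sigma(t) the noise schedule, r the reward function, and B_t Brownian motion; the symbols b, \sigma, r, and u are generic placeholders introduced here for illustration.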
Syllabus
Fine-tuning Flow and Diffusion Generative Models | Carles Domingo-Enrich
Taught by
Valence Labs