The Myth of AI Breakthroughs - Cutting Through Hype in Neural Network Research

MLOps.community via YouTube

Overview

Dive into a thought-provoking podcast episode featuring Jonathan Frankle, Chief Scientist (Neural Networks) at Databricks, as he demystifies AI breakthroughs and shares insights on rigorous AI testing and efficient model training. Explore topics such as face recognition systems, the 'lottery ticket hypothesis,' and robust decision-making in model training. Learn about Frankle's work as an adjunct professor of law, the importance of scientific discourse, and his experiences with GPUs. Gain valuable perspectives on cutting through AI hype, understanding the realities of AI applications, and developing more efficient neural network training algorithms. Discover the challenges in facial recognition technology, the intricacies of sparse networks, and the balance between automation and human involvement in decision-making processes.

Syllabus

- Jonathan's preferred coffee
- Takeaways
- LLM Avalanche panel surprise
- Adjunct Professor of Law
- Low facial recognition accuracy
- Automated decision-making: the human-in-the-loop argument
- Control vs. outsourcing concerns
- perpetuallineup.org
- Face recognition challenges
- The lottery ticket hypothesis
- Mosaic role: model expertise
- Expertise integration in training
- SLURM opinions
- GPU affinity
- Breakthroughs with QStar
- Advice on deciphering the noise
- Real conversations
- How to cut through the noise
- Research iterations and timelines
- User interests, model limits
- Debuggability
- Wrap-up

Taught by

MLOps.community
