Overview

Explore critical questions about open source AI in this thought-provoking conference talk from OSA Con 2023, where Dr. Chris Hazard examines what "open source" truly means in artificial intelligence. Delve into complex issues surrounding AI transparency, including the implications of publishing neural network weights rather than source code, the challenges of model data memorization, attribution problems, and the reliability of AI decision explanations. Learn about alternative approaches to black-box AI and machine learning that align better with open source principles, understand the concept of "programming with data," and discover strategies for addressing intellectual debt in AI development. Gain insights into debugging AI systems, ensuring genuine transparency, and creating truly open source artificial intelligence solutions that uphold the movement's core values.

Syllabus

Most "Open Source" AI Isn't. And What We Can Do About That.
Taught by

OSACon