Overview
Explore the ethical and social implications of Text-to-Image (T2I) models in this 26-minute conference talk by Nithish Kannen, an AI resident at Google DeepMind. Delve into the cultural disparities these models can inadvertently perpetuate because they are largely developed in mono-cultural environments. Learn about methods for assessing the cultural competence of T2I models, best practices for creating evaluation resources, and early efforts to build AI that serves diverse global communities. Gain insights into machine learning, natural language processing, and multimodal models as Kannen addresses topics such as how to explain ML outcomes, interpretability versus explainability, model-agnostic methods, and what makes AI explainable. This talk is valuable for anyone interested in AI fairness, data science, machine learning, and the broader societal implications of artificial intelligence.
Syllabus
- Introduction
- How to explain ML outcomes?
- Interpretability vs Explainability
- Model-Agnostic Methods
- What makes AI Explainable?
- The Missing Link
- Why explain ML models?
Taught by
Open Data Science