
Privacy Governance and Explainability in ML - AI

Strange Loop Conference via YouTube

Overview

Explore the complex intersection of privacy governance and explainability in machine learning and artificial intelligence in this 45-minute conference talk from Strange Loop. Delve into the challenges posed by GDPR and other data privacy regulations, particularly in the context of ML and AI systems. Examine methods for enhancing privacy, governing data used in ML/AI, and addressing potential bias in models. Learn about privacy by design, algorithmic fairness, and the role of developers and engineers in ensuring ethical AI practices. Discover techniques such as dynamic sampling, differential privacy, and multidimensional privacy analytics to mitigate privacy risks. Gain insights into building consumer trust and confidence in an increasingly complex technological landscape.
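The talk names differential privacy among the mitigation techniques. As an illustrative sketch only (not taken from the talk), the example below adds Laplace noise to a counting query, the standard mechanism for answering a count with epsilon-differential privacy; the function name, data, and epsilon value are assumptions for demonstration.

```python
import numpy as np

def dp_count(values, threshold, epsilon=1.0):
    """Differentially private count of values above a threshold.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon gives an epsilon-differentially private answer.
    """
    true_count = sum(v > threshold for v in values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: privatized count of users older than 40 with epsilon = 0.5.
ages = [23, 45, 31, 52, 38, 61, 29, 47]
print(dp_count(ages, threshold=40, epsilon=0.5))
```

Smaller epsilon values add more noise and give stronger privacy guarantees at the cost of accuracy, which is the core trade-off the talk's privacy-risk discussion revolves around.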

Syllabus

Introduction
Agenda
Why does it matter
Landscape of privacy risk
Privacy is more than security
Fundamental right to privacy
Trust context
Transparency and consumer trust
Context-based privacy
Privacy by design
Governance data optimization maturity
Privacy by design in retrospect
Current state of privacy
Machine learning and AI
Algorithmic fairness
Role of developers and engineers
Seeking out risk
Types of data
Methods
Model Prediction Risk
Dynamic Sampling
Differential Privacy
Multidimensional Privacy Analytics
Life Cycle
Conclusion

Taught by

Strange Loop Conference

