
NPTEL

Responsible & Safe AI Systems

NPTEL via Swayam

Overview

ABOUT THE COURSE: There has been an exponential increase in the use of platforms and technologies like ChatGPT, Gemini, Llama, Sora, DALL-E, etc. in our day-to-day lives. These language and vision models have changed the way we live and the way we seek and create information. This course provides students with a comprehensive understanding of the ethical, social, and safety considerations essential for developing and deploying artificial intelligence (AI) systems. It uncovers the intricacies of algorithmic transparency, fairness in machine (un)learning, interpretability, consistency, and more. The course encourages critical thinking and fosters a deep appreciation for the impact of AI on individuals and communities. Students who complete the course will be able to: recognize possible harms caused by modern AI capabilities; reason about various perspectives on the trajectory of AI development and proliferation; and engage with the latest research agendas for making AI systems safer.

INTENDED AUDIENCE: Anybody interested in the area of AI and machine learning, including industry professionals and students.

PREREQUISITES: Any machine learning/AI course would help, though none is mandatory.

INDUSTRY SUPPORT: TCS, Wipro, Microsoft, Infosys, Amazon, and Uber, to name a few; any company involved in AI and ML will be interested.

Syllabus

Weeks 1 & 2:
  • AI Capabilities Improvement in last 5-10 years
  • Imminent risks from AI Models: Toxicity, bias, goal misspecification, adversarial examples etc.
  • Long-term risks from AI Models: Misuse, Misgeneralization, Rogue AGI
  • Principles of RAI - Transparency; Accountability; Safety, Robustness and Reliability; Privacy and Security; Fairness and Non-discrimination; Human-Centred Values; Inclusive and Sustainable Development; Interpretability
  • Recap of Deep Learning Techniques, Language/Vision Models
  • AI risks for generative models
  • Adversarial Attacks – Vision, NLP, Superhuman Go agents
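To make the "adversarial examples" topic above concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) attack on a toy logistic-regression classifier. This example is not from the course materials; the weights, input, and epsilon below are illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(w, b, x, y, eps):
    """Shift x by eps in the direction of the sign of the loss gradient.

    For logistic regression with cross-entropy loss, the gradient of
    the loss with respect to the input is (p - y) * w.
    """
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy example: a correctly classified point (p > 0.5 for true label 1)
# becomes misclassified after a small, targeted perturbation.
w, b = [2.0, -1.5], 0.1
x, y = [0.4, 0.2], 1
print(predict(w, b, x))      # above 0.5: correctly classified
x_adv = fgsm_perturb(w, b, x, y, eps=0.3)
print(predict(w, b, x_adv))  # below 0.5: now misclassified
```

The same idea, applied to the input pixels of a deep vision model, is what produces the imperceptible image perturbations the syllabus refers to.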
Weeks 3 & 4:
  • ML Poisoning Attacks like Trojans
  • Implications for current and future AI safety
  • Explainability
  • Imminent and Long-term potential for transparency techniques
  • Mechanistic Interpretability
  • Representation Engineering, model editing and probing
  • Critiques of Transparency for AI Safety
Weeks 5 & 6:
  • Privacy & Fairness in AI
Weeks 7 & 8:
  • Metrics and tools for RAI - measuring bias/fairness, adversarial testing, explanations (LIME/SHAP/Grad-CAM), audit mechanisms
  • Regulation landscape - DPDP Act (India), GDPR (EU), EU AI Act, US presidential declaration, ethical approvals, informed consent, participatory design, future of work, Indian context
  • What is AGI? When could it be achieved?
  • Instrumental Convergence: Power Seeking, Deception etc.
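As a concrete instance of the bias/fairness measurement mentioned above, here is a minimal sketch of one widely used fairness metric: the demographic (statistical) parity difference, i.e. the gap in positive-prediction rates between two groups. The predictions and group labels below are illustrative, not from the course.

```python
def demographic_parity_difference(preds, groups):
    """Absolute difference in positive-prediction rates between groups.

    preds:  list of 0/1 model predictions
    groups: list of group labels ('A' or 'B' here), one per example
    A value near 0 suggests similar positive rates across groups;
    larger values indicate disparity.
    """
    rate = {}
    for g in ('A', 'B'):
        group_preds = [p for p, grp in zip(preds, groups) if grp == g]
        rate[g] = sum(group_preds) / len(group_preds)
    return abs(rate['A'] - rate['B'])

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']
# Group A receives positives at rate 0.75, group B at 0.25
print(demographic_parity_difference(preds, groups))  # 0.5
```

Libraries such as Fairlearn and AIF360 provide this and many related metrics; the hand-rolled version here is only meant to show what is being measured.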
Weeks 9 & 10:
  • RAI in the legal domain
  • RAI in the healthcare domain
  • RAI in the education domain
  • A few other domains
  • Policy issues in RAI
Weeks 11 & 12:
  • A couple of panel discussions with industry practitioners, academics, government representatives (possibly), and others
  • Fireside chats with eminent personalities
  • Recorded paper-reading discussions

Taught by

Prof. Ponnurangam Kumaraguru, Prof. Balaraman Ravindran, Prof. Arun Rajkumar
