How to Build Trustworthy AI with Open Source

Linux Foundation via YouTube

Overview

Explore the concept of Trustworthy AI and its implementation using open source tools in this informative conference talk. Delve into the rapid adoption of AI in various aspects of life and the growing need for mature, trustworthy AI systems. Examine the efforts of research communities and governmental bodies in defining guidelines and principles for Responsible AI, Ethical AI, and Trustworthy AI. Learn about the Trusted AI ecosystem and witness demonstrations of existing open source projects, including Adversarial Robustness Toolbox, AI Fairness, and AI Explainability. Discover how to leverage these tools to incorporate accountability into your machine learning lifecycle and enhance the trustworthiness of your AI systems. Gain insights into AI's potential, ethical considerations, explainable AI, security vulnerabilities, and the Linux Validation AI framework. Participate in hands-on exercises to familiarize yourself with the ecosystem, project categories, and task definitions. Explore neural networks, adversarial examples, and robustness evaluation techniques to build more reliable and ethical AI solutions.
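The talk demonstrates fairness tooling such as AI Fairness. As a minimal standalone sketch of the kind of metric such toolkits compute (not that project's actual API), here is the disparate-impact ratio, the basis of the "80% rule"; the data, group labels, and function name below are illustrative assumptions:

```python
# Disparate impact ratio: P(favorable | unprivileged) / P(favorable | privileged).
# Ratios below ~0.8 are commonly flagged as potential disparate impact.
# All data and names here are made up for illustration.

def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Compute the ratio of favorable-outcome rates between two groups."""
    def favorable_rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return favorable_rate(unprivileged) / favorable_rate(privileged)

outcomes = [1, 0, 0, 1, 1, 1, 0, 1]   # 1 = favorable decision
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" rate: 2/4 = 0.5; group "b" rate: 3/4 = 0.75; ratio = 0.5/0.75
ratio = disparate_impact(outcomes, groups, unprivileged="a", privileged="b")
```

A full toolkit additionally handles multi-valued protected attributes, confidence intervals, and mitigation algorithms; the metric itself is this simple ratio.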

Syllabus

Introduction
AI has a great potential
Is AI risk-free?
Ethical Problems
Explainable AI
Security and privacy vulnerabilities
Trustworthy AI
Linux Validation AI
How to Achieve Trustworthy AI
Exercise
Getting familiar with the ecosystem
Project categories
Task definition
Task explanation
Robustness: adversarial evaluation
Notebook
Neural Network
Results
Adversarial example
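The final syllabus items cover neural networks, adversarial examples, and robustness evaluation. As a toy illustration of the underlying idea (not the notebook from the talk or the Adversarial Robustness Toolbox API), here is a Fast Gradient Sign Method (FGSM) perturbation against a fixed logistic-regression classifier in NumPy; the weights, input, and epsilon are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier: illustrative weights, not from the talk.
w = np.array([2.0, -3.0])
b = 0.5

def predict_proba(x):
    """Probability of class 1 under the toy logistic model."""
    return sigmoid(x @ w + b)

def fgsm_perturb(x, y, eps):
    """FGSM for logistic regression with binary cross-entropy loss.

    The loss gradient w.r.t. the input is (p - y) * w, so the attack
    takes a step of size eps in the sign of that gradient.
    """
    p = predict_proba(x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.4, -0.3])            # clean input, true label 1
y = 1.0
p_clean = predict_proba(x)           # confidently class 1 on the clean input
x_adv = fgsm_perturb(x, y, eps=0.6)
p_adv = predict_proba(x_adv)         # confidence collapses after the attack
```

Robustness evaluation then amounts to measuring how accuracy degrades as the perturbation budget `eps` grows; real toolkits automate this across attacks and models.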

Taught by

Linux Foundation

