
Guardrails for Trustworthy AI: Balancing Innovation and Responsibility

DevConf via YouTube

Overview

Explore the critical aspects of building trustworthy AI systems in this 37-minute conference talk from DevConf.CZ 2024. Speaker Christoph Görn delves into the multifaceted approach required to safeguard the integrity and reliability of large language models (LLMs). Examine the current state of Trustworthy AI, focusing on the principles of fairness, accountability, transparency, and ethical use. Investigate the challenges posed by LLMs, including bias, limited interpretability, and potential misuse. Learn practical strategies for implementing guardrails around LLMs, such as developing robust model governance frameworks, leveraging open-source tools, and fostering cross-industry collaboration. Discover how companies and communities can ensure their AI-powered software systems are both innovative and trustworthy through regulatory compliance, continuous monitoring, and cultivating an ethical AI culture. Gain valuable insights into balancing innovation and responsibility in the rapidly evolving field of artificial intelligence.

Syllabus

Guardrails for Trustworthy AI: Balancing Innovation and Responsibility - DevConf.CZ 2024

Taught by

DevConf
