
YouTube

Preventing Toxicity and Unconscious Biases Using Large Language and Deep Learning Models

OpenInfra Foundation via YouTube

Overview

Learn how to predict and prevent unconscious biases in AI models through a 40-minute conference talk presented by Armstrong Foundjem at the OpenInfra Foundation. Explore the critical importance of developing fair, interpretable models for high-stakes decision-making in healthcare, finance, and justice systems. Discover techniques for early bias detection in multi-class and multi-label problems, using large language models to classify diverse data across languages, cultures, religions, ages, and genders. Examine the implementation of fine-tuned BERT transformers for complex NLP tasks, achieving 98.7% accuracy in bias prediction through contextual text embedding and task-specific classification. Finally, address the challenges of identifying biases in distributed online communities with complex data sources, and learn practical approaches to building more trustworthy AI systems.
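The talk itself does not ship code, but the BERT setup it describes maps onto a standard multi-label classification pipeline. Below is a minimal sketch using the Hugging Face transformers library; the bias categories, decision threshold, and example text are hypothetical placeholders, and the classification head must still be fine-tuned on labeled data before its scores are meaningful.

```python
# Minimal sketch (not the speaker's code): multi-label bias classification
# with a fine-tunable BERT encoder via Hugging Face transformers.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical bias categories; the talk's actual label set is not published here.
LABELS = ["gender", "age", "religion", "culture", "language"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(LABELS),
    # multi_label_classification -> one independent sigmoid per label,
    # with BCEWithLogitsLoss applied during fine-tuning.
    problem_type="multi_label_classification",
)
model.eval()

texts = ["Hypothetical sentence to screen for biased language."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits  # contextual embeddings -> per-label scores

# Note: the classification head is randomly initialized until the model is
# fine-tuned on labeled examples, so these scores are placeholders.
probs = torch.sigmoid(logits)[0]
flagged = {label: round(p.item(), 3) for label, p in zip(LABELS, probs) if p > 0.5}
print(flagged)
```

Framing the task as multi-label rather than multi-class means each text receives an independent probability per bias category, so a single passage can be flagged for several kinds of bias at once, consistent with the multi-class and multi-label framing in the overview above.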

Syllabus

How large language and deep learning models can prevent toxicity such as unconscious biases

Taught by

OpenInfra Foundation

