Preventing Toxicity and Unconscious Biases Using Large Language and Deep Learning Models
OpenInfra Foundation via YouTube
Overview
Learn how to predict and prevent unconscious biases in AI models in this 40-minute conference talk presented by Armstrong Foundjem at the OpenInfra Foundation. Explore the critical importance of developing fair, interpretable models for high-stakes decision-making in healthcare, finance, and justice systems. Discover techniques for early bias detection in multi-class and multi-label problems, using large language models to classify diverse data across languages, cultures, religions, ages, and genders. Examine the implementation of fine-tuned BERT transformers for complex NLP tasks, which achieves 98.7% accuracy in bias prediction through contextual text embedding and task-specific classification. Address the challenges of identifying biases in distributed online communities with complex data sources, and learn practical solutions for building more trustworthy AI systems.
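The listing does not include the speaker's code, but the fine-tuning setup it describes can be illustrated with a minimal, hypothetical sketch using the Hugging Face Transformers library. The model checkpoint, label set, and example data below are assumptions for illustration only, not the talk's actual configuration or results.

```python
# Minimal sketch (assumed setup, not the speaker's code): fine-tuning a BERT
# classifier for multi-label bias detection with Hugging Face Transformers.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

# Hypothetical bias categories; the talk covers languages, cultures,
# religions, ages, and genders, so a multi-label setup fits.
labels = ["language", "culture", "religion", "age", "gender"]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=len(labels),
    problem_type="multi_label_classification",  # sigmoid per label + BCE loss
)

# Toy training example with a hypothetical multi-hot target vector.
texts = ["Example sentence that may contain biased language."]
targets = torch.tensor([[0.0, 0.0, 0.0, 0.0, 1.0]])

enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**enc, labels=targets)  # returns loss and per-label logits
outputs.loss.backward()                 # one fine-tuning step (optimizer omitted)

# At inference time, each label gets an independent probability.
probs = torch.sigmoid(outputs.logits)
predicted = [labels[i] for i, p in enumerate(probs[0]) if p > 0.5]
```

Setting `problem_type="multi_label_classification"` makes the model apply a sigmoid to each logit with binary cross-entropy loss, so a single text can be flagged for several bias categories at once rather than forced into one class.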
Syllabus
How large language and deep learning models can prevent toxicity, such as unconscious biases
Taught by
OpenInfra Foundation