

Self-Cannibalizing AI: Exposing Generative Text-to-Image Models

media.ccc.de via YouTube

Overview

Explore artistic strategies for exposing generative text-to-image models in this 54-minute conference talk from the 37C3 event. Delve into the complex world of AI image generation, examining how machines learn from one another and engage in self-cannibalism within the generative process. Investigate the inner workings of image-generation models through techniques such as feedback, misuse, and hacking. Learn about experiments on Stable Diffusion pipelines, manipulation of aesthetic scoring in public text-to-image datasets, NSFW classification, and the use of Contrastive Language-Image Pre-training (CLIP) to reveal biases and problematic correlations. Discover how datasets and machine-learning models are filtered and constructed, and examine the implications of these processes. Explore the limitations and tendencies of generative AI models, including their ability to reproduce input images and their default patterns. Join speakers Ting-Chun Liu and Leon-Etienne Kühr as they share insights on the political discourses surrounding generative AI and the challenges of understanding increasingly complex datasets and models.
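The "self-cannibalism" described above can be illustrated with a simple feedback-loop experiment. The sketch below is a hypothetical example, not code from the talk: it repeatedly feeds a Stable Diffusion output back into an img2img pipeline using the Hugging Face diffusers library, so the model keeps consuming its own images. The model id, prompt, and loop parameters are assumptions chosen for illustration.

```python
# Hypothetical sketch of a "self-cannibalizing" feedback loop:
# a Stable Diffusion output is repeatedly fed back into an img2img
# pipeline so the model consumes its own generations.
# Model id, prompt, and parameters are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
model_id = "runwayml/stable-diffusion-v1-5"

txt2img = StableDiffusionPipeline.from_pretrained(model_id).to(device)
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id).to(device)

prompt = "a portrait photograph of a person"

# Initial generation from text only.
image = txt2img(prompt).images[0]
image.save("feedback_0.png")

# Feed each output back in; the model's default patterns accumulate.
for step in range(1, 6):
    image = img2img(prompt=prompt, image=image, strength=0.6).images[0]
    image.save(f"feedback_{step}.png")
```

Running a loop like this makes the model's default aesthetics and biases increasingly visible across iterations, which is the kind of tendency the speakers examine.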

Syllabus

37C3 - Self-cannibalizing AI

Taught by

media.ccc.de

