
Virtual U - Defeating Face Liveness Detection by Building Virtual Models from Your Public Photos

USENIX via YouTube

Overview

Explore a groundbreaking approach to bypassing modern face authentication systems in this 26-minute conference talk from USENIX Security '16. Discover how researchers from the University of North Carolina at Chapel Hill leverage public photos from social media to create realistic, textured 3D facial models that undermine widely used face authentication solutions. Learn about the innovative use of virtual reality (VR) systems to animate facial models, tricking liveness detectors into believing the 3D model is a real human face. Delve into the technical aspects of this VR-based spoofing attack, including base modeling, texture imputation, and eye gaze correction. Examine the experimental data and results that demonstrate the practical nature of this threat to camera-based authentication systems. Consider the implications of this new class of attacks and explore potential mitigations, including texture detection and motion-based defenses. Gain valuable insights into the vulnerabilities of face authentication technologies and the importance of incorporating verifiable data sources in security systems.

Syllabus

Intro
Virtual U
Online Photos
Face Modeling
Base Modeling
Texture Imputation
Correct for Eye Gaze
Experiments
Experimental Data
Results
Observations
Motion Detection Defense
First Experiment
Mitigations
Texture Detection
Conclusion

Taught by

USENIX

