Deep Relightable Appearance Models for Animatable Faces
SIGGRAPH
August 9, 2021
By: Sai Bi, Stephen Lombardi, Shunsuke Saito, Tomas Simon, Shih-En Wei, Kevyn McPhail, Ravi Ramamoorthi, Yaser Sheikh, Jason Saragih
Abstract
We present a method for building high-fidelity animatable 3D face models that can be posed and rendered under novel lighting environments in real time. Our main insight is that relightable models trained to produce an image lit from a single light direction can generalize to natural illumination conditions but are computationally expensive to render. On the other hand, efficient, high-fidelity face models trained with point-light data do not generalize to novel lighting conditions. We leverage the strengths of each of these two approaches. We first train an expensive but generalizable model on point-light illuminations, and use it to generate a training set of high-quality synthetic face images under natural illumination conditions. We then train an efficient model on this augmented dataset, reducing the generalization requirements placed on it. As the efficacy of this approach hinges on the quality of the synthetic data we can generate, we present a study of lighting pattern combinations for dynamic captures and evaluate their suitability for learning generalizable relightable models. Towards achieving the best possible quality, we present a novel approach for generating dynamic relightable faces that exceeds state-of-the-art performance. Our method is capable of capturing subtle lighting effects and can even generate compelling near-field relighting despite being trained exclusively with far-field lighting data. Finally, we motivate the utility of our model by animating it with images captured from VR-headset-mounted cameras, demonstrating the first system for face-driven interactions in VR that uses a photorealistic relightable face model.
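The teacher-to-student pipeline the abstract describes can be illustrated with a minimal numpy sketch. It relies on the linearity of light transport: an image under an environment map is a weighted sum of one-light-at-a-time (point-light) renders. The `teacher_render`, `synth_natural_illumination`, and least-squares "student" below are hypothetical placeholders, not the paper's actual networks — just a schematic of the data-augmentation and distillation idea under those assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the expensive, generalizable teacher: it renders
# the face lit from a single light direction (OLAT). Here a fixed random
# image per light serves as a placeholder.
N_LIGHTS, N_PIXELS = 8, 16
olat_basis = rng.random((N_LIGHTS, N_PIXELS))  # one image per light direction

def teacher_render(light_index):
    """Expensive model: image of the face lit from one point light."""
    return olat_basis[light_index]

def synth_natural_illumination(env_weights):
    """By linearity of light transport, an environment-lit image is a
    weighted sum of the single-light renders."""
    return sum(w * teacher_render(i) for i, w in enumerate(env_weights))

# Step 1: use the teacher to build a synthetic training set of
# (environment, image) pairs under natural illumination.
envs = rng.random((32, N_LIGHTS))
images = np.stack([synth_natural_illumination(e) for e in envs])

# Step 2: fit the "efficient student" on the augmented dataset. A linear
# least-squares model stands in for the real-time network.
student, *_ = np.linalg.lstsq(envs, images, rcond=None)

# The student now matches the teacher's relighting on a novel environment,
# without ever evaluating the expensive per-light model at run time.
test_env = rng.random(N_LIGHTS)
err = np.abs(test_env @ student - synth_natural_illumination(test_env)).max()
```

Because the synthetic mapping here is exactly linear, the student recovers it to numerical precision; the point of the sketch is only the two-stage structure — expensive teacher generates environment-lit data, cheap student trains on it.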
Areas
AR/VR
COMPUTATIONAL PHOTOGRAPHY & INTELLIGENT CAMERAS
COMPUTER VISION
Related Publications
ICSA - November 6, 2019
Auralization systems for simulation of augmented reality experiences in virtual environments
Peter Dodds, Sebastià V. Amengual Garí, W. Owen Brimijoin, Philip W. Robinson
Journal of the Audio Engineering Society - July 20, 2021
Six-Degrees-of-Freedom Parametric Spatial Audio Based on One Monaural Room Impulse Response
Johannes M. Arend, Sebastià V. Amengual Garí, Carl Schissler, Florian Klein, Philip W. Robinson
ACM Transactions on Applied Perception Journal (ACM TAP) - September 16, 2021
Evaluating Grasping Visualizations and Control Modes in a VR Game
Alex Adkins, Lorraine Lin, Aline Normoyle, Ryan Canales, Yuting Ye, Sophie Jörg
ACM MM - October 20, 2021
EVRNet: Efficient Video Restoration on Edge Devices
Sachin Mehta, Amit Kumar, Fitsum Reda, Varun Nasery, Vikram Mulukutla, Rakesh Ranjan, Vikas Chandra