Data-driven Approaches for Personalized Head Reconstruction
Personalized 3D face reconstruction has produced exciting results over the past few years. However, traditional methods usually require complicated setups or controlled environments to capture the detailed shape of a person's face. Most methods also focus solely on the face area and mask out the hair, due to hair's non-rigid nature and complicated layered structure. In this work, we explore data-driven approaches to reconstructing a person's 3D face or head, including the hair, from devices that everyone can easily access.

The first part of our work introduces an algorithm that takes a single frame of a person's face from a commercial depth camera (Kinect) and produces a high-resolution 3D mesh of the input, leveraging a large research dataset of 3D face meshes. We divide the input depth frame into semantically significant regions (eyes, nose, mouth, cheeks) and search the database for the best-matching shape per region. We then combine the input depth frame with the matched database shapes into a single mesh, yielding a high-resolution shape of the input person.

To free people from dedicated capture sessions, the larger portion of this thesis focuses on reconstructing not only the face but also the rest of the head from in-the-wild image collections and videos. We first introduce a boundary-value growing algorithm that models the rough shape of a person's head from a large collection of that person's photos. The method gradually "grows" the head mesh, starting from the frontal face and extending to the remaining views using photometric stereo constraints. We show results on photos of celebrities downloaded from the Internet. However, this algorithm does not reconstruct a complete head model and lacks an explicit model of the hair. We therefore further exploit a person's in-the-wild video to recover the full head model, taking advantage of multi-view information and hairstyle consistency across video frames.
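The per-region matching step described above can be sketched as a nearest-neighbor search over fixed semantic regions of the depth frame. This is a minimal illustrative sketch, not the thesis's exact formulation: the region boxes, the database layout (aligned depth maps), and the L2 distance metric are all assumptions introduced here.

```python
import numpy as np

# Hypothetical region boxes on an aligned 32x32 depth frame:
# (row_start, row_end, col_start, col_end). Real regions would come
# from facial landmarks, not fixed rectangles.
REGIONS = {
    "eyes":  (0, 16, 0, 32),
    "nose":  (16, 24, 8, 24),
    "mouth": (24, 32, 8, 24),
}

def best_match_per_region(depth, database):
    """For each semantic region, return the index of the database
    depth map whose values are closest (L2) to the input region."""
    matches = {}
    for name, (r0, r1, c0, c1) in REGIONS.items():
        patch = depth[r0:r1, c0:c1]
        dists = [np.linalg.norm(patch - db[r0:r1, c0:c1]) for db in database]
        matches[name] = int(np.argmin(dists))
    return matches
```

Because matching is done independently per region, different database subjects can contribute the eyes, nose, and mouth of the final composite mesh.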
Given a video of a person's head, e.g., a TV interview, our method automatically reconstructs a 3D hair model by leveraging a 3D hairstyle database. The resulting 3D hair model can later be deformed to change the hair shape, or recolored to make it brighter or darker. Our head reconstruction also includes facial modeling from the video, which is combined with the hair model. The method is completely automatic and requires as input only a single video taken "in the wild", found as-is on the web or captured as a selfie video on a smartphone. We demonstrate the capability of our method on a variety of celebrity and selfie videos, and compare against the state of the art.
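The hairstyle-consistency idea above can be illustrated with a simple voting scheme: rather than retrieving a database hairstyle independently per frame, one accumulates per-frame similarity scores and picks a single style for the whole video. The score matrix and the summation rule here are illustrative assumptions, not the thesis's actual objective.

```python
import numpy as np

def select_consistent_hairstyle(frame_scores):
    """frame_scores: (n_frames, n_styles) array of per-frame similarity
    scores between the observed hair (e.g., its silhouette) and each
    database hairstyle. Summing over frames selects one style that is
    consistent across the whole video, so noisy individual frames
    cannot each pick a different hairstyle."""
    totals = np.asarray(frame_scores).sum(axis=0)
    return int(np.argmax(totals))
```

For example, if one frame is ambiguous but the remaining frames clearly favor a single style, the accumulated score still selects that style.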