BlendFields: Few-Shot Example-Driven Facial Modeling
- Kacper Kania,
- Stephan J. Garbin,
- Andrea Tagliasacchi,
- Virginia Estellers,
- Kwang Moo Yi,
- Julien Valentin,
- Tomasz Trzcinski,
- Marek Kowalski
Generating faithful visualizations of human faces requires capturing both coarse and fine-level details of the face geometry and appearance. Existing methods are either data-driven, requiring an extensive corpus of data that is not publicly accessible to the research community, or fail to capture fine details because they rely on geometric face models whose mesh discretization and linear deformations are designed to model only coarse geometry and thus cannot represent fine-grained texture detail. We introduce a method that bridges this gap by drawing inspiration from traditional computer graphics techniques. Unseen expressions are modeled by blending appearance from a sparse set of extreme poses. This blending is performed by measuring local volumetric changes in those expressions and locally reproducing their appearance whenever a similar expression is performed at test time. We show that our method generalizes to unseen expressions, adding fine-grained effects on top of smooth volumetric deformations of a face, and demonstrate how it generalizes beyond faces.
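As a rough illustration of the blending mechanism described above, the sketch below shows one way per-point blend weights could be derived from local volumetric-change descriptors and used to mix per-expression appearance. This is not the authors' implementation: the function names, tensor shapes, and the exponential similarity kernel are all assumptions for exposition.

```python
# Minimal sketch (assumed, not the BlendFields code): points whose local
# volumetric change resembles that of a captured extreme expression borrow
# appearance from the model trained on that expression.
import torch


def blend_weights(strain_current, strain_extremes, sigma=0.1):
    """Per-point blend weights over K extreme expressions.

    strain_current:  (N, D) local volumetric-change descriptor per point
                     for the expression being rendered.
    strain_extremes: (K, N, D) the same descriptor for each of the K
                     captured extreme expressions.
    Returns:         (N, K) weights summing to 1 over K.
    """
    # Distance between the current local deformation and each extreme's.
    dist = torch.linalg.norm(
        strain_current.unsqueeze(0) - strain_extremes, dim=-1
    )  # (K, N)
    # Closer deformations get exponentially larger weights
    # (hypothetical kernel; sigma controls blending sharpness).
    logits = -dist.T / sigma  # (N, K)
    return torch.softmax(logits, dim=-1)


def blended_appearance(weights, appearance_extremes):
    """Mix per-expression appearance features with the blend weights.

    weights:             (N, K)
    appearance_extremes: (K, N, C) appearance predicted at the same
                         points by each expression-specific model.
    Returns:             (N, C) blended appearance.
    """
    return torch.einsum("nk,knc->nc", weights, appearance_extremes)
```

Under this reading, the few-shot property follows from the weights being computed locally: a novel expression that combines, say, a raised brow from one captured pose with a smile from another reuses each pose's appearance only in the regions where its deformation matches.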