GeneAvatar: Generic Expression-Aware Volumetric Head Avatar Editing from a Single Image

  • Chong Bao,
  • Yinda Zhang,
  • Yuan Li,
  • Xiyu Zhang,
  • Bangbang Yang,
  • Hujun Bao,
  • Guofeng Zhang,
  • Zhaopeng Cui

CVPR 2024

Publication

Recently, we have witnessed the explosive growth of various volumetric representations for modeling animatable head avatars. However, due to the diversity of frameworks, there is no practical method that supports high-level applications, such as 3D head avatar editing, across different representations. In this paper, we propose a generic avatar editing approach that can be universally applied to various 3DMM-driven volumetric head avatars. To achieve this goal, we design a novel expression-aware modification generative model that lifts 2D editing from a single image to a consistent 3D modification field. To ensure the effectiveness of the generative modification process, we develop several techniques, including an expression-dependent modification distillation scheme to draw knowledge from the large-scale head avatar model and 2D facial texture editing tools, implicit latent space guidance to enhance model convergence, and a segmentation-based loss reweighting strategy for fine-grained texture inversion. Extensive experiments demonstrate that our method delivers high-quality and consistent results across multiple expressions and viewpoints. Project page: https://zju3dv.github.io/geneavatar/
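
The key to representation-agnostic editing is that the edit lives in a separate, expression-conditioned modification field that is simply composed with whatever volumetric avatar backbone is being driven. The sketch below illustrates this composition in PyTorch; it is a minimal illustration under our own assumptions, not the released implementation, and the names (ModificationField, query_avatar, edited_radiance) and the exact conditioning are hypothetical.

```python
import torch
import torch.nn as nn

class ModificationField(nn.Module):
    """Illustrative per-point modification field conditioned on a 3DMM expression code."""
    def __init__(self, expr_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + expr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 channels of RGB offset + 1 density offset
        )

    def forward(self, pts: torch.Tensor, expr: torch.Tensor) -> torch.Tensor:
        # pts: (N, 3) sample points along camera rays; expr: (expr_dim,) expression code
        expr = expr.unsqueeze(0).expand(pts.shape[0], -1)
        return self.mlp(torch.cat([pts, expr], dim=-1))

def edited_radiance(pts, expr, query_avatar, mod_field):
    """Compose any volumetric avatar's output with the learned modification.

    `query_avatar` is assumed to be the backbone's query function returning
    per-point (rgb, sigma); the backbone itself is left untouched.
    """
    rgb, sigma = query_avatar(pts, expr)
    delta = mod_field(pts, expr)
    return rgb + delta[..., :3], sigma + delta[..., 3:4]
```

Because the edit is an additive field over 3D points rather than a change to the backbone's weights, the same trained modification can, in principle, be rendered consistently across novel expressions and viewpoints produced by the underlying avatar.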