Research Area:  Machine Learning
Many role-playing games feature character creation systems that allow players to edit the facial appearance of their in-game characters. This paper proposes a novel method to automatically create game characters from a single face photo. We frame this “artistic creation” process under a self-supervised learning paradigm by leveraging differentiable neural rendering. Since the rendering process of a typical game engine is not differentiable, an “imitator” network is introduced to imitate the behavior of the engine so that the in-game characters can be smoothly optimized by gradient descent in an end-to-end fashion. Different from previous monocular 3D face reconstruction methods, which focus on generating 3D meshes and ignore user interaction, our method produces fine-grained facial parameters with a clear physical significance, so users can optionally fine-tune their auto-created characters by manually adjusting those parameters. Experiments on multiple large-scale face datasets show that our method can generate highly robust and vivid game characters. Our method has been applied to two games and has to date served over 10 million online requests.
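
To make the imitator-based optimization described in the abstract concrete, here is a minimal PyTorch-style sketch of the idea: a neural network stands in for the non-differentiable game engine, is frozen after training, and the facial parameters are then fitted to a photo by gradient descent. The architecture, the parameter count (`NUM_FACIAL_PARAMS`), the generic `feature_extractor`, and the plain L1 feature loss are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_FACIAL_PARAMS = 208  # hypothetical number of character-creation sliders


class Imitator(nn.Module):
    """Toy stand-in for the imitator network: maps facial parameters to a
    rendered face image, approximating the non-differentiable game engine."""

    def __init__(self, num_params: int = NUM_FACIAL_PARAMS):
        super().__init__()
        self.fc = nn.Linear(num_params, 256 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),  # 64x64 RGB output
        )

    def forward(self, params: torch.Tensor) -> torch.Tensor:
        x = self.fc(params).view(-1, 256, 4, 4)
        return self.decoder(x)


def fit_character(imitator: nn.Module,
                  feature_extractor: nn.Module,
                  photo: torch.Tensor,
                  steps: int = 200,
                  lr: float = 0.01) -> torch.Tensor:
    """Freeze the pre-trained imitator and optimize the facial parameters by
    gradient descent so its rendering matches the photo in feature space."""
    imitator.eval()
    for p in imitator.parameters():
        p.requires_grad_(False)

    # Start from "average" slider values in [0, 1].
    params = torch.full((1, NUM_FACIAL_PARAMS), 0.5, requires_grad=True)
    optimizer = torch.optim.Adam([params], lr=lr)
    target_feat = feature_extractor(photo).detach()

    for _ in range(steps):
        optimizer.zero_grad()
        rendered = imitator(params)                   # differentiable "rendering"
        loss = F.l1_loss(feature_extractor(rendered), target_feat)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            params.clamp_(0.0, 1.0)                   # keep sliders in a valid range

    return params.detach()
```

In this sketch, `feature_extractor` could be any differentiable face feature network (for example, a pretrained face recognition model); after the imitator has been trained to reproduce the engine's renderings, `fit_character(imitator, feature_extractor, photo)` returns a parameter vector that could be loaded back into the engine or further fine-tuned by hand, mirroring the interactive adjustment described in the abstract.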
Keywords:  
Author(s) Name:   Tianyang Shi; Zhengxia Zou; Zhenwei Shi; Yi Yuan
Journal name:  IEEE Transactions on Pattern Analysis and Machine Intelligence
Conference name:  
Publisher name:  IEEE
DOI:  10.1109/TPAMI.2020.3024009
Volume Information:  Volume: 44, Issue: 3, 01 March 2022, Page(s): 1489 - 1502
Paper Link:   https://ieeexplore.ieee.org/abstract/document/9197693