Hi. I'm Menglei Chai.

I am a Research Scientist at Snap Inc.
I received my Ph.D. from the Graphics & Parallel Systems Lab (GAPS) at Zhejiang University in 2017.
Before that, I received my B.S. degree in Computer Science from Zhejiang University in 2011.

Learn about what I am doing

I do research in Computer Graphics.

My main focus is image manipulation, physical animation, and modeling.
My Ph.D. advisor was Professor Kun Zhou.

Curriculum Vitae

Click here to see my CV.


  • Stabilized Real-time Face Tracking via a Learned Dynamic Rigidity Prior,
    Chen Cao, Menglei Chai, Oliver Woodford, and Linjie Luo,
    Siggraph Asia 2018, ACM Transactions on Graphics (TOG).
Abstract Bibtex Paper Video

Despite the popularity of real-time monocular face tracking systems in many successful applications, one overlooked problem with these systems is rigid instability. It occurs when the input facial motion can be explained by either head pose change or facial expression change, creating ambiguities that often lead to jittery and unstable rigid head poses under large expressions. Existing rigid stabilization methods either employ a heavy anatomically-motivated approach that is unsuitable for real-time applications, or utilize heuristic-based rules that can be problematic under certain expressions. We propose the first rigid stabilization method for real-time monocular face tracking using a dynamic rigidity prior learned from realistic datasets. The prior is defined on a region-based face model and provides dynamic region-based adaptivity for rigid pose optimization during real-time performance. We introduce an effective offline training scheme to learn the dynamic rigidity prior by optimizing the convergence of the rigid pose optimization to the ground-truth poses in the training data. Our real-time face tracking system is an optimization framework that alternates between rigid pose optimization and expression optimization. To ensure tracking accuracy, we combine both robust, drift-free facial landmarks and dense optical flow into the optimization objectives. We evaluate our system extensively against state-of-the-art monocular face tracking systems and achieve significant improvement in tracking accuracy on the high-quality face tracking benchmark. Our system can improve facial-performance-based applications such as facial animation retargeting and virtual face makeup with accurate expression and stable pose. We further validate the dynamic rigidity prior by comparing it against other variants on tracking accuracy.
  • A Data-Driven Approach to Four-View Image-Based Hair Modeling,
    Meng Zhang, Menglei Chai, Hongzhi Wu, Hao Yang, and Kun Zhou,
    Siggraph 2017, ACM Transactions on Graphics (TOG).
    Abstract Bibtex Paper Video

    We introduce a novel four-view image-based hair modeling method. Given four hair images taken from the front, back, left and right views as input, we first estimate the rough 3D shape of the hair observed in the input using a predefined database of 3D hair models, then synthesize a hair texture on the surface of the shape, from which the hair growing direction information is calculated and used to construct a 3D direction field in the hair volume. Finally, we grow hair strands from the scalp, following the direction field, to produce the 3D hair model, which closely resembles the hair in all input images. Our method does not require that all input images are from the same hair, enabling an effective way to create compelling hair models from images of considerably different hairstyles at different views. We demonstrate the efficacy of our method using a wide range of examples.
  • AutoHair: Fully Automatic Hair Modeling from a Single Image,
    Menglei Chai, Tianjia Shao, Hongzhi Wu, Yanlin Weng, and Kun Zhou,
    Siggraph 2016 (Spotlight Paper), ACM Transactions on Graphics (TOG).
    Abstract Bibtex Paper Video

    We introduce AutoHair, the first fully automatic method for 3D hair modeling from a single portrait image, with no user interaction or parameter tuning. Our method efficiently generates complete and high-quality hair geometries, which are comparable to those generated by the state-of-the-art methods, where user interaction is required. The core components of our method are: a novel hierarchical deep neural network for automatic hair segmentation and hair growth direction estimation, trained over an annotated hair image database; and an efficient and automatic data-driven hair matching and modeling algorithm, based on a large set of 3D hair exemplars. We demonstrate the efficacy and robustness of our method on Internet photos, resulting in a database of around 50K 3D hair models and a corresponding hairstyle space that covers a wide variety of real-world hairstyles. We also show novel applications enabled by our method, including 3D hairstyle space navigation and hair-aware image retrieval.
  • High-Quality Hair Modeling from a Single Portrait Photo,
    Menglei Chai, Linjie Luo, Kalyan Sunkavalli, Nathan Carr, Sunil Hadap, and Kun Zhou,
    Siggraph Asia 2015, ACM Transactions on Graphics (TOG).
    Abstract Bibtex Paper Video Program

    We propose a novel system to reconstruct a high-quality hair depth map from a single portrait photo with minimal user input. We achieve this by combining depth cues such as occlusions, silhouettes, and shading, with a novel 3D helical structural prior for hair reconstruction. We fit a parametric morphable face model to the input photo and construct a base shape in the face, hair and body regions using occlusion and silhouette constraints. We then estimate the normals in the hair region via a Shape-from-Shading based optimization that uses the lighting inferred from the face model and enforces an adaptive albedo prior that models the typical color and occlusion variations of hair. We introduce a 3D helical hair prior that captures the geometric structure of hair, and show that it can be robustly recovered from the input photo in an automatic manner. Our system combines the base shape, the normals estimated by Shape from Shading, and the 3D helical hair prior to reconstruct high-quality 3D hair models. Our single-image reconstruction closely matches the results of a state-of-the-art multi-view stereo method applied to a multi-view dataset. Our technique can reconstruct a wide variety of hairstyles ranging from short to long and from straight to messy, and we demonstrate the use of our 3D hair models for high-quality portrait relighting, novel view synthesis and 3D-printed portrait reliefs.
  • A Reduced Model for Interactive Hairs,
    Menglei Chai, Changxi Zheng, and Kun Zhou,
    Siggraph 2014, ACM Transactions on Graphics (TOG).
    Abstract Bibtex Paper Video Project Page

    Realistic hair animation is a crucial component in depicting virtual characters in interactive applications. While much progress has been made in high-quality hair simulation, the overwhelming computation cost hinders similar fidelity in real-time simulations. To bridge this gap, we propose a data-driven solution. Building upon precomputed simulation data, our approach constructs a reduced model to optimally represent hair motion characteristics with a small number of guide hairs and the corresponding interpolation relationships. At runtime, using this reduced model, we simulate only the guide hairs, which capture the general hair motion, and interpolate all the remaining strands. We further propose a hair correction method that corrects the resulting hair motion with a position-based model to resolve hair collisions and thus capture motion details. Our method enables simulation of a full head of hair with over 150K strands in real time. We demonstrate the efficacy and robustness of our method with various hairstyles and driving motions (e.g., head movement and wind force), and compare against full simulation results for motions that do not appear in the training data.
  • Dynamic Hair Manipulation in Images and Videos,
    Menglei Chai, Lvdi Wang, Yanlin Weng, Xiaogang Jin, and Kun Zhou,
    Siggraph 2013, ACM Transactions on Graphics (TOG).
    Abstract Bibtex Paper Video Project Page

    This paper presents a single-view hair modeling technique for generating visually and physically plausible 3D hair models with modest user interaction. By solving an unambiguous 3D vector field explicitly from the image and adopting an iterative hair generation algorithm, we can create hair models that not only visually match the original input very well but also possess physical plausibility (e.g., having strand roots fixed on the scalp and preserving the length and continuity of real strands in the image as much as possible). The latter property enables us to manipulate hair in many new ways that were previously very difficult with a single image, such as dynamic simulation or interactive hair shape editing. We further extend the modeling approach to handle simple video input, and generate dynamic 3D hair models. This allows users to manipulate hair in a video or transfer styles from images to videos.
  • Single-View Hair Modeling for Portrait Manipulation,
    Menglei Chai, Lvdi Wang, Yanlin Weng, Yizhou Yu, Baining Guo, and Kun Zhou,
    Siggraph 2012, ACM Transactions on Graphics (TOG).
    Abstract Bibtex Paper Video Project Page

    Human hair is known to be very difficult to model or reconstruct. In this paper, we focus on applications related to portrait manipulation and take an application-driven approach to hair modeling. To enable an average user to achieve interesting portrait manipulation results, we develop a single-view hair modeling technique with modest user interaction to meet the unique requirements set by portrait manipulation. Our method relies on heuristics to generate a plausible high-resolution strand-based 3D hair model. This is made possible by an effective high-precision 2D strand tracing algorithm, which explicitly models uncertainty and local layering during tracing. The depth of the traced strands is solved through an optimization, which simultaneously considers depth constraints, layering constraints as well as regularization terms. Our single-view hair modeling enables a number of interesting applications that were previously challenging, including transferring the hairstyle of one subject to another in a potentially different pose, rendering the original portrait in a novel view and image-space hair editing.
  • Adaptive Skinning for Interactive Hair-Solid Simulation,
    Menglei Chai, Changxi Zheng, and Kun Zhou,
    IEEE Transactions on Visualization and Computer Graphics (TVCG) 2016.
    Abstract Bibtex Paper Video

    Reduced hair models have proven successful for interactively simulating a full head of hair strands, building upon the fundamental assumption that only a small set of guide hairs needs to be explicitly simulated, while the rest of the hair moves coherently and thus can be interpolated from the guide hairs. Unfortunately, hair-solid interaction is a pathological case for traditional reduced hair models, as the motion coherence between hair strands can be arbitrarily broken by interaction with solids. In this paper, we propose an adaptive hair skinning method for interactive hair simulation with hair-solid collisions. We precompute many eligible sets of guide hairs and the corresponding interpolation relationships, represented using a compact strand-based hair skinning model. At runtime, we simulate only guide hairs; for interpolating every other hair, we adaptively choose its guide hairs, taking into account motion coherence and potential hair-solid collisions. Further, we introduce a two-way collision correction algorithm that allows sparsely sampled guide hairs to resolve collisions with solids that have small geometric features. Our method enables interactive simulation of more than 150K hair strands interacting with complex solid objects, using 400 guide hairs. We demonstrate the efficiency and robustness of the method with various hairstyles and user-controlled arbitrary hair-solid interactions.
  • Cone Tracing for Furry Object Rendering,
    Hao Qin, Menglei Chai, Qiming Hou, Zhong Ren, and Kun Zhou,
    IEEE Transactions on Visualization and Computer Graphics (TVCG) 2014.
    Abstract Bibtex Paper Video

    We present a cone-based ray tracing algorithm for high-quality rendering of furry objects with reflection, refraction and defocus effects. By aggregating many sampling rays in a pixel as a single cone, we significantly reduce the high supersampling rate required by the thin geometry of fur fibers. To reduce the cost of intersecting fur fibers with cones, we construct a bounding volume hierarchy for the fiber geometry to find the fibers potentially intersecting with cones, and use a set of connected ribbons to approximate the projections of these fibers on the image plane. The computational cost of compositing and filtering transparent samples within each cone is effectively reduced by approximating away in-cone variations of shading, opacity and occlusion. The result is a highly efficient ray tracing algorithm for furry objects which is able to render images of quality comparable to those generated by alternative methods, while significantly reducing the rendering time. We demonstrate the rendering quality and performance of our algorithm using several examples and a user study.
  • As-Rigid-As-Possible Distance Field Metamorphosis,
    Yanlin Weng, Menglei Chai, Weiwei Xu, Yiying Tong, and Kun Zhou,
    Pacific Graphics 2013, Computer Graphics Forum (CGF).
    Abstract Bibtex Paper Video

    Widely used for morphing between objects with arbitrary topology, distance field interpolation (DFI) handles topological transition naturally without the need for correspondence or remeshing, unlike surface-based interpolation approaches. However, lack of correspondence in DFI also leads to ineffective control over the morphing process. In particular, unless the user specifies a dense set of landmarks, it is not even possible to measure the distortion of intermediate shapes during interpolation, let alone control it. To remedy such issues, we introduce an approach for establishing correspondence between the interiors of two arbitrary objects, formulated as an optimal mass transport problem with a sparse set of landmarks. This correspondence enables us to compute non-rigid warping functions that better align the source and target objects as well as to incorporate local rigidity constraints to perform as-rigid-as-possible DFI. We demonstrate how our approach helps achieve flexible morphing results with a small number of landmarks.
  • Hair Interpolation for Portrait Morphing,
    Yanlin Weng, Lvdi Wang, Xiao Li, Menglei Chai, and Kun Zhou,
    Pacific Graphics 2013, Computer Graphics Forum (CGF).
    Abstract Bibtex Paper Video Project Page

    In this paper we study the problem of hair interpolation: given two 3D hair models, we want to generate a sequence of intermediate hair models that transforms from one input to the other in a way that is both smooth and aesthetically pleasing. We propose an automatic method that efficiently calculates a many-to-many strand correspondence between two or more given hair models, taking into account the multi-scale clustering structure of hair. Experiments demonstrate that hair interpolation can be used for producing more vivid portrait morphing effects and enabling a novel example-based hair styling methodology, where a user can interactively create new hairstyles by continuously exploring a “style space” spanning multiple input hair models.
  • Parametric Weight-change Reshaping for Portrait Images,
    Haiming Zhao, Xiaogang Jin, Xiaojian Huang, Menglei Chai, and Kun Zhou,
    Computer Graphics and Applications 2018.
    Abstract Bibtex Paper Video

    We present an easy-to-use parametric image retouching method for thinning or fattening a face in a single portrait image while maintaining a close similarity to the source image. First, our method reconstructs a 3D face from the input face image using a morphable model. Second, according to the linear regression equation derived from the depth statistics of the soft tissue in the face and the user-set parameters of reshaping degree, we calculate the new positions of the feature points. Third, the Laplacian deformation method is employed to calculate the deformed positions of non-feature points in the 3D face model. Finally, we seamlessly blend the projected reshaped face region in the 2D image with the background using an image retargeting method based on mesh parametrization. Our model-based reshaping process can achieve globally consistent editing effects without noticeable artifacts. The effectiveness of our algorithm is demonstrated by experiments and a user study.
  • Surface Mesh Controlled Fast Hair Modeling,
    Menglei Chai, Yanlin Weng, Qiming Hou, and Zhong Ren,
    Journal of Computer-Aided Design & Computer Graphics 2012.
    Abstract Paper (in Chinese)

    We propose a fast hair modeling method based on polygonal surface mesh editing to reduce the complexity of current hair modeling systems. First, coarse surface meshes are created to represent the overall shapes of the target hair models. Then, parameterizations are computed for these meshes to guide the strand directions. Finally, a set of hair stream-lines is generated in space to refine the final strands automatically, fitting the expected shapes while preserving the particular hairstyle effects. Experimental results show that this method can produce high-quality hair models with relatively simple mesh modeling operations. This greatly simplifies the hair modeling procedure and enhances users' control over the final hair shape. Furthermore, this method can be easily integrated with physics-based hair simulation techniques to produce realistic hair animations.
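Several of the papers above ("A Reduced Model for Interactive Hairs", "Adaptive Skinning for Interactive Hair-Solid Simulation") share one core idea: simulate only a few guide strands and reconstruct every other strand as a weighted blend of its guides. The toy sketch below illustrates just that blending step; the guide strands, weights, and function name are illustrative assumptions, not the papers' learned reduced models or precomputed skinning weights.

```python
# Minimal sketch of guide-hair interpolation: each ordinary strand
# is reconstructed as a convex combination of a few simulated guide
# strands. Guides and weights here are toy values for illustration.
import numpy as np

def interpolate_strand(guide_strands, weights):
    """Blend guide strands (each an (N, 3) array of vertex positions)
    into one strand using normalized, convex skinning weights."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so weights sum to 1
    blended = np.zeros_like(np.asarray(guide_strands[0], dtype=float))
    for guide, w in zip(guide_strands, weights):
        blended += w * np.asarray(guide, dtype=float)
    return blended

# Two toy guide strands hanging straight down, 4 vertices each,
# offset by one unit along x.
g0 = np.array([[0, 0, 0], [0, -1, 0], [0, -2, 0], [0, -3, 0]], float)
g1 = np.array([[1, 0, 0], [1, -1, 0], [1, -2, 0], [1, -3, 0]], float)

# A strand weighted 75% toward g0 lies a quarter of the way to g1.
s = interpolate_strand([g0, g1], [0.75, 0.25])
print(s[0])  # first vertex: [0.25 0. 0.]
```

In the actual papers, the per-strand guide sets and weights come from precomputed full simulations (and, in the adaptive-skinning work, are re-selected at runtime when collisions break motion coherence); this sketch only shows why interpolation is cheap enough to run for 150K strands in real time.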


Send me an e-mail at cmlatsim at gmail dot com.
Or find me at Room 405, Information & Control Building, Zijingang Campus, Zhejiang University.