This project presents a method for automatically generating 3D models of the human head from two input images: a front view and a side view of the subject. Key facial landmarks are detected in both images and then used to translate the corresponding vertices of a generic head model so that the mesh matches the subject.
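The core vertex-translation step can be illustrated with a minimal sketch. This is not the project's actual code; it assumes a hypothetical precomputed correspondence between landmark indices and mesh vertex IDs, and that both views are calibrated to the same scale so that a 3D target can be assembled from the front view's (x, y) and the side view's (z, y) coordinates:

```python
import numpy as np

def deform_head(vertices, landmark_vertex_ids, front_lms, side_lms):
    """Move landmark vertices of a generic head mesh to 3D targets
    reconstructed from 2D landmarks in two orthogonal views.

    vertices            -- (N, 3) array of generic-model vertex positions
    landmark_vertex_ids -- vertex index for each facial landmark (assumed known)
    front_lms           -- (x, y) landmark positions from the front image
    side_lms            -- (z, y) landmark positions from the side image
    """
    verts = vertices.copy()
    for vid, (fx, fy), (sz, sy) in zip(landmark_vertex_ids, front_lms, side_lms):
        # x comes from the front view, z from the side view;
        # y appears in both views, so average the two estimates.
        verts[vid] = np.array([fx, (fy + sy) / 2.0, sz])
    return verts
```

In practice the remaining (non-landmark) vertices would also need to be displaced, e.g. by interpolating the landmark offsets across the mesh, so the model deforms smoothly rather than only at the marked points.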
The goal of this project was to give artists a head start in character modeling for animated films or games; accordingly, the final program was developed as a plugin for Autodesk Maya, the industry-standard software for modeling and animation.
This program was developed for my Master’s project in Computer Science at UCLA.