Active Appearance Model Face Tracking: pyFaceTracker

Using a non-rigid, deformable model, the user can build an exceptionally stable and smooth face-tracking environment. The triangulation and connections are built from the 61 detected landmarks, each corresponding to an x and y coordinate. In the video below, the 3D model of the facial landmarks is plotted using matplotlib; the plot can be rotated to view the form of the face that has been mapped. The landmark detection itself produces triangulation and connections that can be used for distance measurement and expression recognition. pyFaceTracker includes a live, video-based landmark detection system that can be used for continuous recognition, and the x and y points can be streamed live, similar to the STASM approach covered in recent posts.

The technology behind pyFaceTracker differs from STASM's Active Shape Model back end in that it uses what is known as an Active Appearance Model, and the difference is noticeable in live use. The Active Shape Model at the core of the STASM library fits patches of the face, known as landmarks, independently of each other to find the individual coordinates. The Active Appearance Model instead builds a full texture map that the landmarks morph into as a whole, so the user can watch the edges of the model actually deforming to fit the user's face. By contrast, STASM produces a more rigid fit in the live landmarking program and lacks the ability to dynamically alter the landmarks in certain positions. Nevertheless, even pyFaceTracker falls short on tasks such as frowning for the “sad” expression, and STASM's lack of an AAM back end should not undermine its power and robustness in detecting facial landmarks at a relatively efficient pace.
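As a rough sketch of the 3D plotting step described above, the snippet below scatters a set of landmark coordinates with matplotlib's mplot3d toolkit. The landmarks array is random placeholder data standing in for the 61 points streamed from the tracker, not output from pyFaceTracker itself.

# Sketch: plotting tracked landmarks in 3D with matplotlib.
# `landmarks` is placeholder data standing in for the tracker's 61 points.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the '3d' projection

landmarks = np.random.rand(61, 3)  # placeholder (x, y, z) coordinates

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(landmarks[:, 0], landmarks[:, 1], landmarks[:, 2])
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()  # drag to rotate the view and inspect the mapped face
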

Building

To build pyFaceTracker, download the package from the Python repository linked here and extract it to your working directory. The package's default setup.py is built for Windows, so you will need to replace it with the setup.py I have added to GitHub under landmarking in the source directory here. In that setup.py, change the OpenCV and pyFaceTracker directories to your respective paths.
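The edits amount to pointing the include and library paths at your installs. The sketch below shows roughly what that looks like; the variable names and library list are assumptions for illustration, and the /path/to placeholders stand in for your actual OpenCV and pyFaceTracker locations, so follow the structure of the real setup.py rather than copying this verbatim.

# Sketch of the path edits in setup.py; variable names here are illustrative.
# Only the /path/to placeholders need changing to your own directories.
from distutils.core import setup, Extension

OPENCV_DIR = '/path/to/opencv'            # your OpenCV install prefix
FACETRACKER_DIR = '/path/to/FaceTracker'  # the extracted pyFaceTracker sources

module = Extension(
    'facetracker',
    include_dirs=[OPENCV_DIR + '/include', FACETRACKER_DIR + '/include'],
    library_dirs=[OPENCV_DIR + '/lib'],
    libraries=['opencv_core', 'opencv_imgproc', 'opencv_highgui'],
    sources=[FACETRACKER_DIR + '/src/facetracker.cpp'],
)

setup(name='pyFaceTracker', version='1.0', ext_modules=[module])

Once the paths point to the right places, install from your working directory: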

sudo python setup.py install

If you run into any errors while building or importing, the cause is most likely the location of your shared libraries.
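If the build or import complains about missing shared objects, pointing the loader at the library directory usually resolves it. The /usr/local/lib path below is an assumption; substitute wherever your OpenCV libraries actually live.

export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
sudo ldconfig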

Next, use the face_video.py file to run the pyFaceTracker program.

python face_video.py
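For reference, a minimal webcam loop in the spirit of face_video.py might look like the following. The OpenCV calls are real, but the tracker calls are left as commented placeholders because the wrapper's exact API is not reproduced here; consult face_video.py for the actual function names.

# Minimal sketch of a live tracking loop, modeled loosely on face_video.py.
# The tracker calls below are hypothetical placeholders, not pyFaceTracker's API.
import cv2

cap = cv2.VideoCapture(0)  # open the default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # --- hypothetical tracker call: replace with the wrapper's real API ---
    # landmarks = tracker.track(gray)   # the 61 (x, y) points
    # for (x, y) in landmarks:
    #     cv2.circle(frame, (int(x), int(y)), 2, (0, 255, 0), -1)
    cv2.imshow('pyFaceTracker', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()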

Tracking may fail in low light, in which case you may need to change the position of the laptop.
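One quick way to confirm that lighting is the problem is to check the mean intensity of a captured frame. The threshold of 60 below is a rough assumption for illustration, not a value taken from pyFaceTracker.

# Rough low-light check with OpenCV; the threshold of 60 is an assumption.
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if gray.mean() < 60:
        print('Scene looks too dark for reliable tracking; reposition the laptop.')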
