

FaceTrack detects and tracks one or more faces and their facial features in images and videos from any standard camera or video file, in color, grayscale or near-infrared. For each detected face it returns detailed face data, including: 2D and 3D head pose, facial feature point coordinates (chin tip, nose tip, lip corners, etc.), a set of action units describing the current facial expression (e.g. jaw drop), eye-closure and eye-gaze information, and a 3D triangle mesh model of the face in the current pose and expression.
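As a rough illustration of the per-face data described above, the sketch below models it as a plain data structure. All names here (`FaceData`, its fields, `describe`) are hypothetical and chosen for readability; they are not the SDK's actual API.

```python
from dataclasses import dataclass

# Hypothetical container mirroring the per-face data listed above.
# Field names and units are illustrative, not the SDK's real interface.
@dataclass
class FaceData:
    head_pose: tuple          # (pitch, yaw, roll) rotation angles
    feature_points_3d: dict   # e.g. {"chin_tip": (x, y, z), ...}
    action_units: dict        # e.g. {"jaw_drop": 0.35, ...}
    eye_closure: tuple        # (left, right), 1.0 = fully closed
    gaze_direction: tuple     # (horizontal, vertical) gaze angles

def describe(face: FaceData) -> str:
    """Summarize one tracked face, e.g. for logging or debugging."""
    pitch, yaw, roll = face.head_pose
    return (f"pose(p={pitch:.2f}, y={yaw:.2f}, r={roll:.2f}), "
            f"{len(face.feature_points_3d)} points, "
            f"{len(face.action_units)} action units")

face = FaceData(
    head_pose=(0.05, -0.10, 0.00),
    feature_points_3d={"chin_tip": (0.0, -0.07, 0.02),
                       "nose_tip": (0.0, 0.0, 0.05)},
    action_units={"jaw_drop": 0.35},
    eye_closure=(0.1, 0.1),
    gaze_direction=(0.02, -0.04),
)
print(describe(face))
```

In a real integration this data would be filled in by the tracker each frame; the structure above only shows how the individual outputs (pose, points, action units, eye data) fit together.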

Gaze tracking tells you where a person is looking. Combined with face analysis, it can help measure visual attention and the influence of specific content on emotions, providing valuable data for marketing research, user studies, commercial testing and more.





FaceAnalysis estimates people’s gender, age and emotions in real time. This helps you build engaging, personalized experiences.


Our face recognition is based on face descriptors. Each biometric template is a mathematical representation of a user's face, so biometric and personal information are strictly separated. This way of handling data ensures a high level of privacy, even when dealing with extremely sensitive data. The system calculates the similarity between the input face descriptor and all face descriptors previously stored in a gallery. The goal is to find the face(s) from the gallery that are most similar to the input face.
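The gallery-matching step described above can be sketched as a nearest-neighbor search over descriptor vectors. This is a minimal illustration assuming cosine similarity as the metric; the actual descriptor format, dimensionality, and similarity measure used by the SDK may differ, and the names `match` and `gallery` are invented for this example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match(descriptor, gallery, top_k=1):
    """Rank gallery entries by similarity to the input descriptor,
    returning the top_k most similar (name, score) pairs."""
    scored = [(name, cosine_similarity(descriptor, d))
              for name, d in gallery.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy 3-dimensional descriptors; real descriptors are much longer
# and carry no directly interpretable personal information.
gallery = {
    "alice": [0.9, 0.1, 0.0],
    "bob":   [0.1, 0.9, 0.2],
}
probe = [0.85, 0.15, 0.05]
print(match(probe, gallery))  # "alice" ranks first
```

Note that only the abstract descriptors are compared; no image or personal data is needed at match time, which is what keeps biometric and personal information separated.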





