visage|SDK is divided into three specialized packages. They can be licensed separately or combined. This page provides a brief overview of each package.
FaceTrack detects and tracks one or more faces and their facial features in images and videos from any standard camera or video file, in color, grayscale or near-infrared. For each detected face it returns detailed face data including: 2D and 3D head pose and facial point coordinates (chin tip, nose tip, lip corners etc.), a set of action units describing the current facial expression (e.g. jaw drop), eye closure and eye-gaze information, and a 3D triangle mesh model of the face in the current pose and expression.
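The per-face data listed above can be pictured as a simple structure. This is only an illustrative sketch: the field names below are hypothetical and do not reflect visage|SDK's actual API.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class TrackedFace:
    """Hypothetical sketch of per-face tracking output (not the real visage|SDK type)."""
    rotation: Tuple[float, float, float]              # 3D head pose: pitch, yaw, roll
    translation: Tuple[float, float, float]           # 3D head position relative to the camera
    feature_points_2d: List[Tuple[float, float]]      # chin tip, nose tip, lip corners, ...
    feature_points_3d: List[Tuple[float, float, float]]
    action_units: Dict[str, float]                    # current expression, e.g. {"jaw_drop": 0.4}
    eye_closure: Tuple[float, float]                  # left eye, right eye; 1.0 = fully closed
    gaze_direction: Tuple[float, float]               # horizontal and vertical gaze angles
    mesh_vertices: List[Tuple[float, float, float]]   # 3D triangle mesh in current pose/expression

# Example values are made up for illustration.
face = TrackedFace(
    rotation=(0.0, 0.1, 0.0),
    translation=(0.0, 0.0, 0.6),
    feature_points_2d=[(0.45, 0.52)],
    feature_points_3d=[(0.0, -0.05, 0.02)],
    action_units={"jaw_drop": 0.4},
    eye_closure=(0.0, 0.0),
    gaze_direction=(0.02, -0.01),
    mesh_vertices=[(0.0, 0.0, 0.0)],
)
print(face.action_units["jaw_drop"])  # 0.4
```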
Gaze tracking tells you where a person is looking. Combined with face analysis, it can help measure visual attention and the influence of specific content on emotions, providing valuable data for marketing research, user studies, commercial testing and more.
FaceAnalysis estimates people’s gender, age and emotions in real time. This helps you build engaging, personalized experiences.
To estimate age, FaceAnalysis takes into account facial landmarks such as the pupils, eye corners and lip boundaries. Emotion estimation returns a probability distribution over the six universal emotions (happiness, sadness, anger, fear, surprise and disgust) plus neutral.
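The emotion output described above can be illustrated as follows; this is a minimal sketch with made-up probability values, not a call into the actual FaceAnalysis API.

```python
# A probability distribution over the six universal emotions plus neutral,
# as FaceAnalysis is described to return. Values here are invented for illustration.
probabilities = {
    "happiness": 0.62, "sadness": 0.03, "anger": 0.02, "fear": 0.01,
    "surprise": 0.12, "disgust": 0.02, "neutral": 0.18,
}

# A valid probability distribution sums to 1.
assert abs(sum(probabilities.values()) - 1.0) < 1e-9

# A typical consumer of this output picks the most probable emotion.
dominant = max(probabilities, key=probabilities.get)
print(dominant)  # happiness
```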
FaceRecognition is a robust and scalable face recognition solution that provides quick and accurate results for surveillance, identification and identity verification.
FaceRecognition measures the similarity between faces and finds the best match. It can help you protect your data, get closer to your target audience or simply improve the user experience of your clients.
Our face recognition is based on face descriptors: biometric templates that are mathematical representations of users' faces, which means that biometric and personal information are strictly separated. This way of handling data ensures a high level of privacy, even when dealing with extremely sensitive data. The system calculates the similarity between the input face descriptor and all face descriptors previously stored in a gallery, with the goal of finding the face(s) from the gallery that are most similar to the input face.
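The gallery search described above can be sketched with a generic similarity measure. This is an illustrative example only: the descriptors are made up, and cosine similarity stands in for whatever metric the SDK actually uses.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two descriptor vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, gallery):
    """Return the (identity, descriptor) pair in the gallery most similar to the probe."""
    return max(gallery.items(), key=lambda item: cosine_similarity(probe, item[1]))

# Toy gallery of previously stored descriptors (values invented for illustration).
gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}
probe = [0.85, 0.15, 0.35]  # descriptor computed from the input face

name, _ = best_match(probe, gallery)
print(name)  # alice
```

In practice a similarity threshold is also applied, so that a probe matching no one in the gallery is rejected rather than mapped to the least dissimilar entry.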