The scenario is verification of a live face image against the image of a face from an ID. It is done in four main steps:

  • Locate the face in each of the two images;

  • Extract the face descriptor from each of the two faces;

  • Compare the two descriptors to obtain the similarity value;

  • Compare the similarity value to a chosen threshold, resulting in a match or non-match.

These steps are described here in further detail:

  • In each of the two images (live face and ID image), the face first needs to be located:

    • To locate the face, you can use detection (for a single image) or tracking (for a series of images from a camera feed).

      • See VisageSDK::VisageTracker::track() or VisageSDK::VisageFeaturesDetector::detectFacialFeatures().

    • Each of these functions returns the number of faces found in the image; if there is not exactly one face, you may report an error or take other action.

    • Furthermore, these functions return the FaceData structure for each detected face, containing the face location.

    • Note: the ID image should be cropped so that the ID occupies most of the image (if the face on the ID is too small relative to the whole image, it might not be detected).

  • The next step is to extract a face descriptor from each image. The descriptor is an array of short integers that describes the face - similar faces will have similar descriptors.

    • From the previous step you have one FaceData structure for the ID image and one FaceData structure for the live image.

    • Pass each image with its corresponding FaceData to the function VisageFaceRecognition::extractDescriptor().

  • Pass the two descriptors to the function VisageFaceRecognition::descriptorsSimilarity() to compare the two descriptors to each other and obtain the measure of their similarity. This is a float value between 0 (no similarity) and 1 (perfect similarity).
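In the SDK this comparison is done entirely by VisageFaceRecognition::descriptorsSimilarity(); its internal metric is not specified here. As an illustration of how two short-integer descriptors can yield a similarity value between 0 and 1, the hypothetical helper below maps the cosine similarity of two arrays into that range (this is only a sketch of the concept, not the SDK's implementation):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

// Illustrative only: maps the cosine similarity of two descriptor
// arrays into [0, 1]. The actual metric used by
// VisageFaceRecognition::descriptorsSimilarity() is internal to the SDK.
float descriptorSimilaritySketch(const short* a, const short* b, std::size_t n)
{
    double dot = 0.0, normA = 0.0, normB = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        dot   += static_cast<double>(a[i]) * b[i];
        normA += static_cast<double>(a[i]) * a[i];
        normB += static_cast<double>(b[i]) * b[i];
    }
    if (normA == 0.0 || normB == 0.0)
        return 0.0f;                                             // degenerate descriptor
    double cosine = dot / (std::sqrt(normA) * std::sqrt(normB)); // in [-1, 1]
    return static_cast<float>((cosine + 1.0) / 2.0);             // map to [0, 1]
}
```

Identical descriptors give a value near 1, unrelated ones a value near the middle of the range, and opposite ones a value near 0, matching the intuition that similar faces have similar descriptors.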

  • If the similarity is greater than a chosen threshold, consider that the live face matches the ID face.

    • By choosing the threshold, you control the trade-off between False Positives and False Negatives:

      • If the threshold is very high, there will be virtually no False Positives, i.e. the system will virtually never declare a match when in reality the live person is not the person in the ID.

      • However, with a very high threshold a False Negative may happen more often: failing to match a person who really is the same as in the ID, resulting in an alert that will need to be handled appropriately (probably requiring human intervention).

      • Conversely, with a very low threshold, such a “false alert” will virtually never be raised, but the system may then fail to detect non-matches (producing False Positives): cases when the live person really does not match the ID.

      • There is no single “correct” threshold, because it depends on the priorities of a specific application. If the priority is to avoid false alerts, the threshold may be lower; if the priority is to avoid undetected non-matches, the threshold should be higher.
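To make the trade-off concrete, the sketch below counts both error types over two lists of similarity scores. The score values and thresholds are invented for illustration; in practice they would come from evaluating descriptorsSimilarity() on pairs with known ground truth:

```cpp
#include <cassert>
#include <vector>

// Count verification errors at a given threshold.
// genuine:  similarity scores for pairs that truly match (live person is the person in the ID)
// impostor: similarity scores for pairs that truly do not match
// A False Negative is a genuine pair rejected (score <= threshold);
// a False Positive is an impostor pair accepted (score > threshold).
struct ErrorCounts { int falsePositives; int falseNegatives; };

ErrorCounts countErrors(const std::vector<float>& genuine,
                        const std::vector<float>& impostor,
                        float threshold)
{
    ErrorCounts e{0, 0};
    for (float s : genuine)
        if (s <= threshold) ++e.falseNegatives;
    for (float s : impostor)
        if (s > threshold) ++e.falsePositives;
    return e;
}
```

Raising the threshold shifts errors from False Positives toward False Negatives, and lowering it does the opposite; which direction is preferable depends on the application's priorities, as discussed above.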