...

General questions

What skills do I need in order to use visage|SDK?

...

Note

This is a reference to the deprecated offline documentation. It needs to be replaced with new instructions on how to obtain the SDK version.


Info

See also:

...


visage|SDK 8.4 for rPI can be downloaded from the following link: https://www.visagetechnologies.com/downloads/visageSDK-rPI-linux_v8.4.tar.bz2
Once you unpack it, you will find the documentation in the root folder. It will guide you through the API and the available sample projects.

...

Important notes:

  • Because of very low demand, we currently provide visage|SDK for rPI on demand only.

  • The package you have received is visage|SDK 8.4. The latest release is visage|SDK 8.7, but it is not yet available for rPI. If your initial tests prove promising, we will need to discuss the possibility of building the latest version on demand for you. visage|SDK 8.7 provides better performance and accuracy, but the API is mostly unchanged, so you can run relevant initial tests with 8.4.

  • visage|SDK 8.4 for rPI has been tested with rPI3b+; it should work with rPI4, but we have not tested that.


Will visage|SDK for HTML5 work in browsers on smartphones?

...

Assuming that you have an image and an ID (name, number or similar) for each person, you register each person by storing their face descriptor into a gallery (database). For each person, the process is as follows (a short code sketch follows the list):

  • Locate the face in the image:

    • To locate the face, you can use detection (for a single image) or tracking (for a series of images from a camera feed).

      • See function VisageSDK::VisageTracker::track() or VisageFeaturesDetector::detectFacialFeatures().

    • Each of these functions returns the number of faces in the image - if there is not exactly one face, you may report an error or take other actions.

    • Furthermore, these functions return the FaceData structure for each detected face, containing the face location.

  • Use VisageFaceRecognition.AddDescriptor() to get the face descriptor and add it to the gallery of known faces together with the name or ID of the person.

    • The descriptor is an array of short integers that describes the face - similar faces will have similar descriptors.

    • The gallery is simply a database of face descriptors, each with an attached ID.

      • Note that you could store the descriptors in your own database, without using the provided gallery implementation.

  • Save the gallery using VisageFaceRecognition.SaveGallery().
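
To make the flow above concrete, here is a minimal C++ sketch of the registration step. It assumes the image has already been loaded into a VsImage, that the VisageFeaturesDetector and VisageFaceRecognition objects have been initialized with a valid license and data files, and that the descriptor length can be obtained via getDescriptorSize(); the helper function name and the gallery file name are illustrative only, and exact signatures and return-value conventions may differ between visage|SDK versions, so check the API reference shipped with the package.

#include <vector>
#include "VisageFeaturesDetector.h"
#include "VisageFaceRecognition.h"

using namespace VisageSDK;

// Registers one person from a single still image under the given ID.
// Returns false if the image does not contain exactly one detectable face.
bool registerPerson(VisageFeaturesDetector& detector,
                    VisageFaceRecognition& recognition,
                    VsImage* image,
                    const char* personId)
{
    // 1. Locate the face (detection, since this is a single still image).
    const int maxFaces = 2; // allow more than one face so that multi-face images can be rejected
    FaceData faces[maxFaces];
    if (detector.detectFacialFeatures(image, faces, maxFaces) != 1)
        return false; // not exactly one face - report an error or take other action

    // 2. Extract the face descriptor and add it to the gallery under personId.
    std::vector<short> descriptor(recognition.getDescriptorSize());
    if (!recognition.extractDescriptor(&faces[0], image, descriptor.data()))
        return false;
    recognition.addDescriptor(descriptor.data(), personId);
    return true;
}

// After all persons are registered, persist the gallery to a file (the name is just an example):
//   recognition.saveGallery("my_gallery.bin");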

...

At this stage, you match a new facial image (for example, a person arriving at a gate, reception, control point or similar) against the previously stored gallery and obtain the IDs of one or more of the most similar persons registered in the gallery (a code sketch follows the list below).

  • First, locate the face(s) in the new image.

    • The steps are the same as explained above in the Registration part. You obtain a FaceData structure for each located face.

  • Pass the FaceData to VisageFaceRecognition.ExtractDescriptor() to get the face descriptor of the person.

  • Pass this descriptor to VisageFaceRecognition.Recognize(), which will match it against all the descriptors you have previously stored in the gallery and return the name/ID of the most similar person (or a desired number of the most similar persons).

    • The Recognize() function also returns a similarity value, which you may use to cut off false positives.
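
As an illustration, here is a minimal C++ sketch of the identification step described above, under the same assumptions as in the registration sketch (initialized detector and recognition objects, image already loaded into a VsImage, gallery already populated or loaded); the function name and exact signatures are indicative only and should be checked against the API reference.

#include <string>
#include <vector>
#include "VisageFeaturesDetector.h"
#include "VisageFaceRecognition.h"

using namespace VisageSDK;

// Returns the ID of the most similar person in the gallery, or an empty string if
// the image does not contain exactly one face or the best similarity is below 'threshold'.
std::string identifyPerson(VisageFeaturesDetector& detector,
                           VisageFaceRecognition& recognition,
                           VsImage* image,
                           float threshold)
{
    // 1. Locate the face in the new image.
    const int maxFaces = 2; // allow more than one face so that multi-face images can be rejected
    FaceData faces[maxFaces];
    if (detector.detectFacialFeatures(image, faces, maxFaces) != 1)
        return "";

    // 2. Extract the face descriptor of the person.
    std::vector<short> descriptor(recognition.getDescriptorSize());
    if (!recognition.extractDescriptor(&faces[0], image, descriptor.data()))
        return "";

    // 3. Match against the gallery; ask for the single best match
    //    (recognize() can also return a desired number of most similar persons).
    const char* name = 0;
    float similarity = 0.0f;
    recognition.recognize(descriptor.data(), 1, &name, &similarity);

    // 4. Use the similarity value to cut off false positives.
    return (name != 0 && similarity >= threshold) ? std::string(name) : std::string();
}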

...

  • In each of the two images (live face and ID image), the face first needs to be located (a code sketch of the full comparison follows this list):

    • To locate the face, you can use detection (for a single image) or tracking (for a series of images from a camera feed).

      • See function VisageSDK::VisageTracker::track() or VisageFeaturesDetector::detectFacialFeatures().

    • Each of these functions returns the number of faces in the image - if there is not exactly one face, you may report an error or take other actions.

    • Furthermore, these functions return the FaceData structure for each detected face, containing the face location.

    • Note: the ID image should be cropped so that the ID occupies most of the image (if the face on the ID is too small relative to the whole image, it might not be detected).

  • The next step is to extract a face descriptor from each image. The descriptor is an array of short integers that describes the face. Similar faces will have similar descriptors.

    • From the previous step, you have one FaceData structure for the ID image and one FaceData structure for the live image.

    • Pass each image with its corresponding FaceData to the function VisageFaceRecognition::extractDescriptor().

  • Pass the two descriptors to the function VisageFaceRecognition::descriptorsSimilarity() to compare the two descriptors to each other and obtain the measure of their similarity. This is a float value between 0 (no similarity) and 1 (perfect similarity).

  • If the similarity is greater than the chosen threshold, consider that the live face matches the ID face.

    • By choosing the threshold, you control the trade-off between False Positives and False Negatives:

      • If the threshold is very high, there will be virtually no False Positives, i.e. the system will virtually never declare a match when, in reality, the live person is not the person in the ID.

      • However, with a very high threshold, False Negatives may happen more often - the system may fail to match a person who really is the same as in the ID, resulting in an alert that will need to be handled in an appropriate way (probably requiring human intervention).

      • Conversely, with a very low threshold, such “false alerts” will virtually never be raised, but the system may then fail to detect genuine non-matches (i.e. False Positives become more likely) - the cases when the live person really does not match the ID.

      • There is no “correct” threshold, because it depends on the priorities of a specific application. If the priority is to avoid false alerts, the threshold may be lower; if the priority is to avoid undetected non-matches, the threshold should be higher.
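
The whole comparison can be sketched in C++ as follows, under the same assumptions as in the earlier sketches (initialized detector and recognition objects, both images already loaded into VsImage structures, exact signatures subject to the API reference of your SDK version); the function name and the threshold handling are illustrative only.

#include <vector>
#include "VisageFeaturesDetector.h"
#include "VisageFaceRecognition.h"

using namespace VisageSDK;

// Returns true if the live face and the face on the ID are considered the same person.
// 'threshold' controls the False Positive / False Negative trade-off described above.
bool verifyAgainstId(VisageFeaturesDetector& detector,
                     VisageFaceRecognition& recognition,
                     VsImage* liveImage,
                     VsImage* idImage,
                     float threshold)
{
    // 1. Locate the face in each of the two images (crop the ID image beforehand).
    FaceData liveFaces[2], idFaces[2];
    if (detector.detectFacialFeatures(liveImage, liveFaces, 2) != 1) return false;
    if (detector.detectFacialFeatures(idImage, idFaces, 2) != 1) return false;

    // 2. Extract a face descriptor from each image.
    std::vector<short> liveDesc(recognition.getDescriptorSize());
    std::vector<short> idDesc(recognition.getDescriptorSize());
    if (!recognition.extractDescriptor(&liveFaces[0], liveImage, liveDesc.data())) return false;
    if (!recognition.extractDescriptor(&idFaces[0], idImage, idDesc.data())) return false;

    // 3. Compare the two descriptors: similarity is a float between 0 (no similarity) and 1 (perfect).
    float similarity = recognition.descriptorsSimilarity(liveDesc.data(), idDesc.data());

    // 4. Apply the chosen threshold.
    return similarity > threshold;
}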

...

The mesh uses static texture coordinates, so it is fairly simple to replace the texture image and use other themes instead of the tiger mask. We provide the texture image in a form that makes it fairly easy to create other textures in Photoshop and use them as a face mask. The template texture file (jk_300_textureTemplate.png) can be found in the visageSDK\Samples\data\ directory. Simply create a texture image with the facial features (mouth, nose etc.) placed according to the template, and use this texture instead of the tiger one. In the Showcase Demo sample, you can change the texture by modifying the texture file which is set in line 331 of the ShowcaseDemo.xaml.cs source file:

...

The error you received indicates that the server has rejected the license. Please get in touch with your Visage Technologies contact person, who will investigate the matter further and get back to you.

Info

[Internal - for Visage Technologies contact person] - Check on the VTLS server for the reason why the license was rejected. It can be one of the following (the most common being -6):

  • -1 - License Key sent to VTLS server for validation was malformed

  • -2 - Device ID sent to VTLS server for validation was malformed

  • -3 - License does not exist on VTLS server

  • -4 - License is blocked on VTLS server

  • -5 - License expired

  • -6 - Installations limit reached

  • -7 - Concurrent users limit reached

...