
General questions

Am I entitled to receive technical support?

  • Most of our licenses include an initial 5 hours of support, so if you have purchased a visage|SDK license, you can use this support (delivered via email).

  • For the majority of our clients the initial support hours are more than sufficient, but it is also possible to order additional support; your contact person can advise you on this.

  • If you are evaluating visage|SDK and have technical issues, we will do our best, within reasonable limits, to support your evaluation.

How do I request technical support?

If you have a technical issue using visage|SDK, please email your Visage Technologies contact person the following information:

  • an error report, including the error messages you receive and anything else you think may help our team resolve the issue,

  • the operating system you are using,

  • the version of visage|SDK (shown in the upper left corner of the documentation, as in the image below).

This is a reference to the deprecated offline documentation. It needs to be replaced with new instructions on how to obtain the SDK version.

Languages, platforms, tools etc.

Can I use visage|SDK with Unity?

visage|SDK packages for Windows, iOS, Android, macOS and HTML5 each provide Unity integration, including sample Unity projects with full source code. For more details, please see documentation.html in the root folder of every visage|SDK package. Specifically, in the documentation, click Samples, then Unity3D, to find information about Unity integration and the relevant sample projects.

This FAQ entry contains pointers to the visage|SDK offline documentation which is deprecated. These pointers need to be replaced with links to online documentation.

High-level functionalities

How do I perform liveness detection with visage|SDK?

visage|SDK includes active liveness detection: the user is required to perform a simple facial gesture (smile, blink or eyebrow raise), and face tracking is then used to verify that the gesture was actually performed. You can configure which gesture(s) to include. As the app developer, you also need to display appropriate messages to the user.
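
The active-liveness flow described above can be sketched in plain Python. This is a hypothetical illustration only: the gesture names and the detect_gesture() callback are assumptions, and in a real app the gesture check would be backed by visage|SDK face tracking rather than a plain function.

```python
import random

# Hypothetical sketch of the active-liveness flow: prompt a random gesture,
# then verify the user performs it. detect_gesture() is an assumed stand-in
# for the tracker-based gesture check, not a visage|SDK call.
GESTURES = ["smile", "blink", "eyebrow_raise"]

def run_liveness_check(detect_gesture, enabled_gestures=GESTURES, attempts=3):
    """Prompt one randomly chosen enabled gesture and verify it is performed.

    detect_gesture(gesture) -> bool stands in for face-tracking verification.
    Returns True if the gesture is detected within the allowed attempts.
    """
    requested = random.choice(enabled_gestures)
    # The app is responsible for showing this prompt to the user.
    print(f"Please perform: {requested}")
    for _ in range(attempts):
        if detect_gesture(requested):
            return True
    return False
```

For example, an app that only enables the smile gesture would call `run_liveness_check(detect, enabled_gestures=["smile"])`, where `detect` wraps the tracker output.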

All visage|SDK packages include the API for liveness detection. However, only visage|SDK for Windows and visage|SDK for Android contain a ready-to-run Liveness Detection demo. For a quick test of the liveness detection function, the easiest approach is to download visage|SDK for Windows, run “DEMO_FaceTracker2.exe” and select “Perform Liveness” from the Liveness menu.

The technical demos in Android and Windows packages of visage|SDK include the source code intended to help you integrate liveness detection into your own application.

Need to insert links to the documentation of these sample projects and the Liveness API.

How do I perform identification of a person from a database?

These guidelines are written for the use case of identifying a student from a database of students. They can easily be adapted to other cases, such as employees.

The main steps involved in implementing the identification process are registration and identification, as follows.

  • Register all students in a school, let’s say 2000 of them, by doing the following for each (presuming you have their images):

    • run face tracker on the image to find the face (VisageTracker.Track()) and obtain the FaceData;

    • use VisageFaceRecognition.AddDescriptor() to get the face descriptor and add it to the gallery of known faces, together with the name or ID of the student;

    • save the gallery using VisageFaceRecognition.SaveGallery().

  • Then, for each person arriving at the identification point (gate, POS etc.):

    • run face tracker on live camera image to find the face (VisageTracker.Track()) and obtain the FaceData;

    • pass FaceData to VisageFaceRecognition.ExtractDescriptor() to get the face descriptor of the person;

    • pass this descriptor to VisageFaceRecognition.Recognize(), which will match it to all the descriptors you have previously stored in the gallery and return the name of the most similar student;

    • the Recognize() function also returns a similarity value, which you may use to filter out false positives.
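
The register-then-identify flow above can be sketched as follows. This is an assumption-laden illustration, not visage|SDK code: plain float vectors stand in for the face descriptors the SDK would extract, and the local helpers add_descriptor()/recognize() only mirror the roles of the API calls named in the text.

```python
import math

# Gallery of known faces: (name_or_id, descriptor) pairs. In the real flow the
# descriptors would come from the visage|SDK face recognition API.
gallery = []

def add_descriptor(name, descriptor):
    """Registration step: store one student's descriptor with their name/ID."""
    gallery.append((name, descriptor))

def similarity(a, b):
    """Cosine similarity between two descriptors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recognize(descriptor, threshold=0.8):
    """Identification step: match a descriptor against the whole gallery.

    Returns (best_name, best_similarity); best_name is None when the best
    match falls below the threshold, which filters out false positives.
    """
    best_name, best_sim = None, -1.0
    for name, known in gallery:
        s = similarity(descriptor, known)
        if s > best_sim:
            best_name, best_sim = name, s
    if best_sim < threshold:
        return None, best_sim
    return best_name, best_sim
```

The threshold mirrors the similarity cut-off mentioned in the last step: a higher value rejects more impostors at the cost of more false rejections, so it should be tuned on your own data.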

Need to insert links to relevant API parts in text and/or as “See also” section.

How do I perform verification of a live face vs. ID photo?

The scenario is verification of a live face image against the image of a face from an ID.

  • In each of the two images (live face and ID image), there should be only one face, clearly visible.

    • If that is not the case, e.g. if there is no face or there are two or more faces, you can use visage|SDK to detect this and report an error.

    • The ID image should be cropped so that the ID is occupying most of the image.

  • visage|SDK face tracking API will be used to detect the face in each image.

  • visage|SDK face recognition API will then be used to generate a numerical descriptor for each of the two faces, compare the two descriptors to each other, and return a measure of their similarity.

  • visage|SDK provides functions for all of the above steps, but you need to use these functions to program the integrated application.
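
The verification flow above can be sketched like this. It is a stand-in illustration under stated assumptions: the lists of detected faces and the float-vector descriptors would come from the visage|SDK tracking and recognition APIs, and the verify() helper and its threshold are hypothetical, not part of the SDK.

```python
import math

def verify(live_faces, id_faces, match_threshold=0.75):
    """Verify the descriptor from a live image against one from an ID image.

    live_faces / id_faces are the lists of descriptors detected in each image;
    per the guidelines above, each image must contain exactly one clearly
    visible face, otherwise an error is reported. Returns (accepted, similarity).
    """
    # Error-reporting step: reject images with zero or multiple faces.
    if len(live_faces) != 1:
        raise ValueError(f"live image must contain exactly 1 face, found {len(live_faces)}")
    if len(id_faces) != 1:
        raise ValueError(f"ID image must contain exactly 1 face, found {len(id_faces)}")
    # Compare the two descriptors via cosine similarity (an assumed metric).
    a, b = live_faces[0], id_faces[0]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    sim = dot / norm
    return sim >= match_threshold, sim
```

As with identification, the acceptance threshold trades off false accepts against false rejects and should be tuned for the application.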

Need to insert links to relevant API parts in text and/or as “See also” section.
