
...

Our face tracking algorithm is among the best available, but like all computer vision algorithms it has its limits related to image quality, lighting conditions, occlusions, or factors such as head pose.

If you notice specific issues or have special requirements, you may send us your test video footage and any specific requests, and we will process it and send you back the tracking results. This may allow us to fine-tune the tracker configuration to your specific requirements and send you the best possible results.

...

The fastest way to test ear tracking is in the online Showcase Demo (https://visagetechnologies.com/demo/). Simply enable Ears in the Draw Options menu.

...

visage|SDK includes active liveness detection. The user is required to perform a simple facial gesture (smile, blink or raise eyebrows). Face tracking is then used to verify that the gesture is actually performed. You can configure which gesture(s) you want to include. As the app developer, you also need to take care of displaying appropriate messages to the user.

All visage|SDK packages include the API for liveness detection. However, only visage|SDK for Windows and visage|SDK for Android contain a ready-to-run demo of Liveness Detection. So, for a quick test of the liveness detection function, the easiest approach is probably to download visage|SDK for Windows, run “DEMO_FaceTracker2.exe” and select “Perform Liveness” from the Liveness menu.

...

This article outlines how to use face recognition to identify a person from a database of known people. It may be applied to cases such as whitelists for access control or attendance management, blacklists for alerts, and similar. The main processes involved in implementing this scenario are registration and matching, as follows.

...

  • Locate the face in the image:

    • To locate the face, you can use detection (for a single image) or tracking (for a series of images from a camera feed).

      • See function VisageSDK::VisageTracker::track() or VisageFeaturesDetector::detectFacialFeatures().

    • Each of these functions returns the number of faces in the image - if there is not exactly one face, you may report an error or take other action.

    • Furthermore, these functions return the FaceData structure for each detected face, containing the face location.

  • Use VisageFaceRecognition.AddDescriptor() to get the face descriptor and add it to the gallery of known faces together with the name or ID of the person.

    • The descriptor is an array of short integers that describes the face - similar faces will have similar descriptors.

    • The gallery is simply a database of face descriptors, each with an attached ID.

      • Note that you could store the descriptors in your own database, without using the provided gallery implementation.

  • Save the gallery using VisageFaceRecognition.SaveGallery().
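
For illustration, here is a minimal C++ registration sketch. It assumes the C++ API and that the photo is already available as a VsImage* (loaded from a file or converted from a camera frame); the exact method names, parameters and return-value conventions of detectFacialFeatures(), addDescriptor() and saveGallery() may differ between visage|SDK versions, so please verify them against the API reference in your package.

#include "VisageFeaturesDetector.h"
#include "VisageFaceRecognition.h"

using namespace VisageSDK;

// Register one known person in the gallery.
// 'image' is assumed to already contain the registration photo.
bool registerPerson(VsImage* image, const char* personId,
                    VisageFeaturesDetector& detector,
                    VisageFaceRecognition& recognition)
{
    FaceData face;

    // Locate the face; a registration photo is expected to contain exactly one face.
    int faceCount = detector.detectFacialFeatures(image, &face, 1);
    if (faceCount != 1)
        return false; // report an error or take other action

    // Extract the face descriptor and add it to the gallery under the person's ID.
    if (recognition.addDescriptor(image, &face, personId) <= 0)
        return false;

    // Persist the gallery so it can be loaded later for matching.
    return recognition.saveGallery("known_faces.gallery") > 0;
}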

Matching

In this stage, you match a new facial image (for example, a person arriving at a gate, reception, control point or similar) against the previously stored gallery, and obtain the IDs of one or more of the most similar persons registered in the gallery.
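
As a rough sketch of this stage (continuing the registration example above; in particular, the recognize() call and its parameters are assumptions to be checked against the VisageFaceRecognition documentation in your package):

#include <cstdio>
#include <vector>

// Match a probe image against the previously saved gallery and list
// the most similar registered identities.
void matchAgainstGallery(VsImage* image,
                         VisageSDK::VisageFeaturesDetector& detector,
                         VisageSDK::VisageFaceRecognition& recognition)
{
    recognition.loadGallery("known_faces.gallery");

    VisageSDK::FaceData face;
    if (detector.detectFacialFeatures(image, &face, 1) != 1)
        return; // not exactly one face - report an error or take other action

    // Extract the descriptor of the probe face.
    std::vector<short> descriptor(recognition.getDescriptorSize());
    recognition.extractDescriptor(&face, image, descriptor.data());

    // Retrieve the 3 most similar identities from the gallery.
    const int TOP_N = 3;
    const char* names[TOP_N];
    float similarities[TOP_N];
    int found = recognition.recognize(descriptor.data(), TOP_N, names, similarities);

    for (int i = 0; i < found; ++i)
        std::printf("%s (similarity %.2f)\n", names[i], similarities[i]);
}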

...


How do I perform verification of a live face vs. an ID photo?

The scenario is the verification of a live face image against the image of a face from an ID. This is done in four main steps:

...

  • In each of the two images (live face and ID image), the face first needs to be located:

    • To locate the face, you can use detection (for a single image) or tracking (for a series of images from a camera feed).

      • See function VisageSDK::VisageTracker::track() or VisageFeaturesDetector::detectFacialFeatures().

    • Each of these functions returns the number of faces in the image - if there is not exactly one face, you may report an error or take other action.

    • Furthermore, these functions return the FaceData structure for each detected face, containing the face location.

    • Note: the ID image should be cropped so that the ID occupies most of the image (if the face on the ID is too small relative to the whole image, it might not be detected).

  • The next step is to extract a face descriptor from each image. The descriptor is an array of short integers that describes the face. Similar faces will have similar descriptors.

    • From the previous step, you have one FaceData structure for the ID image and one FaceData structure for the live image.

    • Pass each image with its corresponding FaceData to the function VisageFaceRecognition::extractDescriptor().

  • Pass the two descriptors to the function VisageFaceRecognition::descriptorsSimilarity() to compare them and obtain the measure of their similarity: a float value between 0 (no similarity) and 1 (perfect similarity). A minimal verification sketch follows this list.

  • If the similarity is greater than the chosen threshold, consider that the live face matches the ID face.

    • By choosing the threshold, you control the trade-off between False Positives and False Negatives:

      • If the threshold is very high, there will be virtually no False Positives, i.e. the system will practically never declare a match when, in reality, the live person is not the person in the ID.

      • However, with a very high threshold, a False Negative may happen more often - not matching a person who really is the same as in the ID, resulting in an alert that will need to be handled in an appropriate way (probably requiring human intervention).

      • Conversely, with a very low threshold, such “false alerts” will virtually never be raised, but the system may then fail to detect True Negatives - the cases when the live person really does not match the ID.

      • There is no “correct” threshold, because it depends on the priorities of a specific application. If the priority is to avoid false alerts, the threshold may be lower; if the priority is to avoid undetected non-matches, the threshold should be higher.
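
Putting the four steps together, a minimal C++ verification sketch might look as follows. The images are assumed to be already loaded as VsImage*, and the threshold value is only an example that must be tuned on your own data; check the exact signatures of detectFacialFeatures(), extractDescriptor() and descriptorsSimilarity() in your package's API reference.

#include <vector>

#include "VisageFeaturesDetector.h"
#include "VisageFaceRecognition.h"

using namespace VisageSDK;

// Verify a live face image against an ID photo using descriptor similarity.
bool verifyLiveAgainstId(VsImage* liveImage, VsImage* idImage,
                         VisageFeaturesDetector& detector,
                         VisageFaceRecognition& recognition,
                         float threshold) // example value: 0.7f, tune for your application
{
    FaceData liveFace, idFace;

    // Step 1: locate exactly one face in each image.
    if (detector.detectFacialFeatures(liveImage, &liveFace, 1) != 1) return false;
    if (detector.detectFacialFeatures(idImage, &idFace, 1) != 1) return false;

    // Step 2: extract a face descriptor from each image.
    const int size = recognition.getDescriptorSize();
    std::vector<short> liveDescriptor(size), idDescriptor(size);
    recognition.extractDescriptor(&liveFace, liveImage, liveDescriptor.data());
    recognition.extractDescriptor(&idFace, idImage, idDescriptor.data());

    // Step 3: measure similarity (0 = no similarity, 1 = perfect similarity).
    float similarity = recognition.descriptorsSimilarity(liveDescriptor.data(),
                                                         idDescriptor.data());

    // Step 4: accept the match only above the chosen threshold.
    return similarity > threshold;
}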

...

How do I determine if a person is looking at the screen?

First, let me say that visage|SDK does not have an out-of-the-box option to determine if the person is looking at the screen. However, it should not be too difficult to implement that. What visage|SDK does provide is:

...

Please also note that the estimated gaze direction may be a bit unstable (the gaze vectors appearing “shaky”) due to the difficulty of accurately locating the pupils. At the same time, the 3D head pose (head direction) is much more stable. Because people usually turn their head in the direction in which they are looking, it may also be interesting to use the head pose as an approximation of the gaze direction.
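
As an example of this approach, here is a minimal C++ sketch based on the head pose. It assumes that FaceData::faceRotation holds the head rotation in radians around the x, y and z axes (pitch, yaw, roll); the 15-degree tolerance is an arbitrary example value that you should tune for your camera and screen setup.

#include <cmath>

#include "FaceData.h"

// Rough heuristic: consider the user to be looking at the screen when the
// head is approximately facing the camera.
bool isLookingAtScreen(const VisageSDK::FaceData& face)
{
    const float TOLERANCE = 15.0f * 3.14159265f / 180.0f; // 15 degrees in radians

    float pitch = std::fabs(face.faceRotation[0]); // up/down rotation
    float yaw   = std::fabs(face.faceRotation[1]); // left/right rotation

    return pitch < TOLERANCE && yaw < TOLERANCE;
}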

...

visage|SDK can be used to locate and track faces in group/crowd images and also to perform face recognition (identity) and face analysis (age, gender, emotion estimation). Such use requires particular care regarding performance, since there may be many faces to process. Some initial guidelines:

  • Face tracking is limited to 20 faces (for performance reasons). To locate more faces in the image, use face detection (class VisageFeaturesDetector).

  • visage|SDK is capable of detecting/tracking faces whose size in the image is at least 5% of the image width (height in case of portrait images).

    • The default setting is to detect faces larger than 10% of the image size for the VisageFeaturesDetector, and 15% for the VisageTracker. The parameter for the minimal face scale needs to be modified to process smaller faces.

    • If you are using high-resolution images with many faces, so that each face is smaller than 5% of the image width, one solution may be to divide the image into portions and process each portion separately (see the sketch after this list). Alternatively, a custom version of visage|SDK may be discussed.

  • For optimal performance of algorithms for face recognition and analysis (age, gender, emotion), faces should be at least 100 pixels wide.
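
As a sketch of how the detector might be called for a group photo (the maxFaces and minFaceScale parameters of detectFacialFeatures() are assumptions based on the guidelines above; verify their names, order and defaults in the VisageFeaturesDetector documentation of your version):

#include "VisageFeaturesDetector.h"

// Detect many (possibly small) faces in a single group/crowd image.
// 'faces' is a caller-provided array with room for 'maxFaces' entries.
int detectGroupFaces(VsImage* image,
                     VisageSDK::VisageFeaturesDetector& detector,
                     VisageSDK::FaceData* faces, int maxFaces)
{
    // Allow faces down to ~5% of the image width instead of the 10% default.
    const float MIN_FACE_SCALE = 0.05f;

    int found = detector.detectFacialFeatures(image, faces, maxFaces, MIN_FACE_SCALE);

    // faces[0..found-1] now hold the face locations for recognition/analysis.
    // If the faces are still too small, split the image into overlapping portions
    // and run this detection on each portion separately, as suggested above.
    return found;
}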

...

Our sample projects give you good starting points for implementing your own masks and other effects using powerful mainstream graphics applications (such as Blender, Photoshop, Unity 3D and others). Specifically:

...

The mesh uses static texture coordinates, so it is fairly simple to replace the texture image and use other themes instead of the tiger mask. We provide the texture image in a form that makes it fairly easy to create other textures in Photoshop and use them as a face mask. This is the template texture file (jk_300_textureTemplate.png) found in the visageSDK\Samples\data\ directory. You can simply create a texture image with facial features (mouth, nose etc.) placed according to the template image, and use this texture instead of the tiger. You can modify the texture in the Showcase Demo sample by changing the texture file, which is set in line 331 of the ShowcaseDemo.xaml.cs source file:

...

This sample project is based on Unity 3D. It includes the tiger mask effect and also a 3D model (glasses) superimposed on the face. Unity 3D is an extremely powerful game/3D engine that gives you many more choices and much more freedom in implementing your effects, while starting from the basic ones provided in our project. For more information on the sample project, see Samples – Unity 3D – Visage Tracker Unity Demo in the Documentation. Furthermore, the “Animation and AR modeling guide” document, available in the Documentation under the link “References”, explains how to create and import a 3D model to overlay on the face, which may also be of interest to you.

In Visage Tracker Unity Demo, the tiger face mask effect is achieved using the same principles as in the Showcase Demo. Details of the implementation can be found in the Tracker.cs file (located in the visageSDK\Samples\Unity\VisageTrackerUnityDemo\Assets\Scripts\ directory) by searching for the keyword "tiger".

...

Depending on the platform, it’s already possible out of the box.

...

Make sure that you have followed all the steps from the documentation section Building and running Unity application.

...

It seems that the slowdown is the result of suboptimal frame mirroring (our native Windows plugin mirrors the image). The track function itself seems to be fast enough. There are further optimizations that could be implemented; however, for a quick fix, try turning off frame mirroring in the Unity project by setting the Mirrored property to 0 in the Unity Editor Property page for the Tracker object.

...

Pixel manipulation is an expensive operation. To avoid it, please turn off the isMirrored value (which is set to 1 by default) and, if needed, adjust the mirroring property in the camera settings instead.

...