Table of Contents |
---|
...
...
...
...
...
...
...
...
...
Note |
---|
This is a reference to the deprecated offline documentation. It needs to be replaced with new instructions on how to obtain the SDK version. |
...
...
...
...
Note |
---|
This FAQ entry contains pointers to the visage|SDK offline documentation which is deprecated. These pointers need to be replaced with links to online documentation. |
Can I use visage|SDK with C#?
...
...
...
...
...
...
...
...
...
...
Can I use visage|SDK with Java?
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
An alternative to wrapping NSObject is to wrap the C++ API in C-interface functions and expose them through a bridging header. An example of such a wrapper is provided in visage|SDK in the form of the VisageTrackerUnity Plugin, which provides a simpler, high-level API through a C interface.
A general example of how this technique is usually implemented can be found here:
https://www.swiftprogrammer.info/swift_call_cpp.html
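The technique can be sketched as follows. This is a minimal, hypothetical illustration (not actual visage|SDK code): a C++ class is hidden behind plain C functions with `extern "C"` linkage, which Swift can then call through a bridging header. The `Tracker` class and function names here are stand-ins, not SDK identifiers.

```cpp
#include <string>

// Stand-in for a C++ SDK class we want to expose to Swift.
class Tracker {
public:
    explicit Tracker(const std::string& config) : config_(config), frames_(0) {}
    int processFrame() { return ++frames_; }  // pretend tracking work
private:
    std::string config_;
    int frames_;
};

// C interface: an opaque handle plus free functions, compiled with C linkage
// so they are visible from Swift through a bridging header.
extern "C" {
    void* tracker_create(const char* config) { return new Tracker(config); }
    int   tracker_process_frame(void* handle) {
        return static_cast<Tracker*>(handle)->processFrame();
    }
    void  tracker_destroy(void* handle) { delete static_cast<Tracker*>(handle); }
}
```

Swift then declares the three C functions in the bridging header and manages the opaque `void*` handle; the C++ types never cross the language boundary.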
...
...
...
...
...
Important notes:
Will visage|SDK for HTML5 work in browsers on smartphones?
...
The package you have received is visage|SDK 8.4. The latest release is visage|SDK 8.7, but that is not yet available for rPI. If your initial tests prove interesting, we will need to discuss the possibility of building the latest version on-demand for you. visage|SDK 8.7 provides better performance and accuracy, but the API is mostly unchanged, so you can run relevant initial tests.
...
visage|SDK 8.4 for rPI has been tested with rPI3b+; it should work with rPI4 but we have not tested that.
...
On iOS, the HTML5 demos work in Safari browser version 11 and higher. They do not work in Chrome and Firefox browsers due to limitations on camera access.
(https://stackoverflow.com/questions/59190472/webrtc-in-chrome-ios)
Which browsers does visage|SDK for HTML5 support?
The HTML5 demo page contains the list of supported browsers: https://visagetechnologies.com/demo/#supported-browsers
Please note that the current HTML5 demos have not been optimized for use in mobile browsers so for best results it is recommended to use a desktop browser.
Internet connection and privacy issues
Does my device/app need internet connection?
Applications developed using visage|SDK do not need an internet connection for operation.
Internet connection is needed only for license registration, which happens only once, when the application is used for the first time.
For hardware devices, this can typically be done at installation time – you would connect the device to the network, run the application once, the license registration happens automatically, and after that no network connection is needed.
How do you handle privacy of user data? Does visage|SDK store or send it to a cloud server?
visage|SDK never transfers any user information to any server.
visage|SDK never automatically stores any user information locally.
In summary, you as application developer have full control and responsibility for any storage or transfer of user data that you may implement in your application.
Cameras, hardware
Can I use visage|SDK with an IP camera?
Yes, visage|SDK can be used with any IP camera. However, note that visage|SDK actually does not include camera support as part of the API; the API simply takes a bitmap image. Image grabbing from camera is implemented at the application level.
Our sample projects show how to access local cameras and we currently have no ready-to-go code for accessing IP cameras. There are various ways to grab images from IP cameras, for example using libVLC.
Usually, IP cameras provide a URL from which you can access the raw camera feed. If you can open that URL and see the feed in VideoLAN VLC Media Player, the same URL can be used in libVLC to access the feed programmatically.
Info |
---|
See also:
|
Are these camera specs ok for visage|SDK: 1920 x 1080, 30 fps, mono?
The mentioned camera parameters (1920 x 1080, 30 fps, mono) should be appropriate for our software and for most use cases.
In general, visage|SDK works with a very wide range of camera resolutions and can provide sustainable tracking on faces as small as 30×30 pixels. For further details on inputs and other features specifically for face tracking please see https://visagetechnologies.com/facetrack-features/
What should be the camera Field of View (FoV)?
Regarding the FoV, the selection should primarily depend on the use case (planned position of the camera and the captured scene). Our algorithm should be robust enough to handle some optical distortion caused by lenses with a large FoV; however, extreme distortions (e.g. a fish-eye lens) will negatively affect the algorithm's performance.
Face Tracking
Can I process a video and save tracking data/information to a file?
There is no ready-made application to do this, but it can be achieved by modifying existing sample projects. Such modification should be simple to do for any software developer, based on the documentation delivered as part of visage|SDK. We provide some instructions here.
To get you started, each platform-specific visage|SDK package contains sample projects with full source code that can be modified for this purpose. In the documentation, click on Samples to find the documentation of the specific sample projects.
On Windows there are Visual Studio 2015 samples: ShowCaseDemo, which is written in C#, and FaceTracker2, which is written in C++.
On macOS there is an Xcode sample: VisageTrackerDemo, which is written in Objective-C++.
Processing and saving information to a file can be implemented in parts of the sample projects where tracking from video file is performed:
In Showcase Demo project on Windows, the appropriate function is worker_DoWorkVideo().
In FaceTracker2 sample project on Windows, the appropriate function is CVisionExampleDoc::trackFromVideo().
In VisageTrackerDemo sample project on macOS, the appropriate function is trackingVideoThread().
Please search source code of each sample for exact location.
Important note: sample projects have “video_file_sync” (or similarly named functionality) enabled, which skips video frames if tracking is slower than real-time. This functionality should be disabled for full video processing, i.e. processing of all video frames.
Info |
---|
See also: |
How many faces can be tracked simultaneously?
The maximum number of faces that can be tracked simultaneously is internally limited to 20 for performance reasons.
Using VisageFeaturesDetector, any number of faces can be detected in an image.
How far from the camera can the face be tracked?
The tracking distance depends on the camera resolution. For example, for a webcam with resolution 1920×1080 (Logitech C920), the tracker can be configured to detect and track faces up to ~7.25 meters from the camera (with a performance tradeoff).
Face tracking does not work as I expected
Our face tracking algorithm is among the best available, but like all Computer Vision algorithms it has its limits related to image quality, lighting conditions, occlusions, or specific factors such as head pose.
If you notice specific issues or have special requirements, you may send us your test video footage and any specific requests, and we will process it and send you back the tracking results. This may allow us to fine-tune the tracker configuration to your specific requirements and send you the best possible results.
How can I test and use ear tracking?
The fastest way to test ear tracking is in the online Showcase Demo (https://visagetechnologies.com/demo/); simply enable Ears in the Draw Options menu.
The online demo is based on visage|SDK for HTML5 and has some limitations due to the HTML5 implementation. For even better performance, you may want to download visage|SDK for Windows, Android or iOS - each of these packages contains a native Showcase Demo in which ear tracking can be tried.
If you are already developing using visage|SDK and want to enable ear tracking, you can use the ready-made configuration file “Facial Features Tracker - High - With Ears.cfg”. Ear tracking is enabled using the refine_ears configuration parameter.
Note |
---|
Add See also section with link to the Configuration Manual, to the API for setting configuration parameters and to FaceData structure for accessing the tracking results. |
Face Recognition
How far from a camera can a face be recognized?
This depends on camera resolution. Face Recognition works best when the size of the face in the image is 150 pixels or more.
Info |
---|
See also: |
High-level functionalities
How do I perform liveness detection with visage|SDK?
visage|SDK includes active liveness detection: the user is required to perform a simple facial gesture (smile, blink or eyebrow raising) and face tracking is then used to verify that the gesture is actually performed. You can configure which gesture(s) you want to include. As app developer you also need to take care of displaying appropriate messages to the user.
All visage|SDK packages include the API for liveness detection. However, only visage|SDK for Windows and visage|SDK for Android contain a ready-to-run demo of Liveness Detection. So, for a quick test of the liveness detection function, it is probably easiest to download visage|SDK for Windows, run “DEMO_FaceTracker2.exe” and select “Perform Liveness” from the Liveness menu.
The technical demos in Android and Windows packages of visage|SDK include the source code intended to help you integrate liveness detection into your own application.
Note |
---|
Need to insert links to the documentation of these sample projects and the Liveness API. |
How do I perform identification of a person from a database?
This article outlines how to use face recognition to identify a person from a database of known people. It may be applied to cases such as whitelists for access control or attendance management, blacklists for alerts, and similar. The main processes involved in implementing this scenario are registration and matching, as follows.
Registration
Assuming that you have an image and an ID (name, number or similar) for each person, you register each person by storing their face descriptor into a gallery (database). For each person, the process is as follows:
Locate the face in the image:
To locate the face, you can use detection (for a single image) or tracking (for a series of images from a camera feed).
See function VisageSDK::VisageTracker::track() or VisageFeaturesDetector::detectFacialFeatures().
Each of these functions returns the number of faces in the image - if there is not exactly one face you may report an error or take other action.
Furthermore, these functions return the FaceData structure for each detected face, containing the face location.
Use VisageFaceRecognition.AddDescriptor() to get the face descriptor and add it to the gallery of known faces together with the name or ID of the person.
The descriptor is an array of short integers that describes the face - similar faces will have similar descriptors.
The gallery is simply a database of face descriptors, each with attached ID.
Note that you could store the descriptors in your own database, without using the provided gallery implementation.
Save the gallery using VisageFaceRecognition.SaveGallery().
Matching
In this stage, you match a new facial image (for example, a person arriving at a gate, reception, control point or similar) against the previously stored gallery, and obtain IDs of one or more most similar persons registered in the gallery.
First, locate the face(s) in the new image.
The steps are the same as explained above in the Registration part. You obtain a FaceData structure for each located face.
Pass the FaceData to VisageFaceRecognition.ExtractDescriptor() to get the face descriptor of the person.
Pass this descriptor to VisageFaceRecognition.Recognize(), which will match it to all the descriptors you have previously stored in the gallery and return the name/ID of the most similar person (or a desired number of most similar persons);
the Recognize() function also returns a similarity value, which you may use to cut off the false positives.
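The registration/matching flow above can be sketched with a toy in-memory gallery. This is a hedged illustration only: real descriptors come from the VisageFaceRecognition API, while here they are hard-coded stand-ins, and `addDescriptor`/`recognize` below are hypothetical helpers written for this sketch (the SDK's own gallery uses similarly named functions, but this is not SDK code). Similarity is computed here as cosine similarity, which captures the idea that "similar faces have similar descriptors".

```cpp
#include <cmath>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

struct Gallery {
    // Each entry pairs a person's ID with their face descriptor.
    std::vector<std::pair<std::string, std::vector<short>>> entries;

    // Registration: store a descriptor together with the person's ID.
    void addDescriptor(const std::string& id, const std::vector<short>& d) {
        entries.emplace_back(id, d);
    }

    // Cosine similarity between two descriptors (1 = identical direction).
    static double similarity(const std::vector<short>& a, const std::vector<short>& b) {
        double dot = 0, na = 0, nb = 0;
        for (std::size_t i = 0; i < a.size(); ++i) {
            dot += double(a[i]) * b[i];
            na  += double(a[i]) * a[i];
            nb  += double(b[i]) * b[i];
        }
        return dot / (std::sqrt(na) * std::sqrt(nb));
    }

    // Matching: return the ID of the most similar registered person.
    std::string recognize(const std::vector<short>& query) const {
        std::string bestId;
        double best = -1;
        for (const auto& e : entries) {
            double s = similarity(e.second, query);
            if (s > best) { best = s; bestId = e.first; }
        }
        return bestId;
    }
};
```

In a real application you would also keep the similarity of the best match and compare it against a threshold to reject unknown faces, as described for the Recognize() function above.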
Note |
---|
Need to insert links to relevant API parts in text and/or as “See also” section. |
How do I perform verification of a live face vs. ID photo?
The scenario is verification of a live face image against the image of a face from an ID. It is done in four main steps:
Locate the face in each of the two images;
Extract the face descriptor from each of the two faces;
Compare the two descriptors to obtain the similarity value;
Compare the similarity value to a chosen threshold, resulting in a match or non-match.
These steps are described here with further detail:
In each of the two images (live face and ID image), the face first needs to be located:
To locate the face, you can use detection (for a single image) or tracking (for a series of images from a camera feed).
See function VisageSDK::VisageTracker::track() or VisageFeaturesDetector::detectFacialFeatures().
Each of these functions returns the number of faces in the image - if there is not exactly one face you may report an error or take other action.
Furthermore, these functions return the FaceData structure for each detected face, containing the face location.
Note: the ID image should be cropped so that the ID is occupying most of the image (if the face on the ID is too small relative to the whole image it might not be detected).
The next step is to extract a face descriptor from each image. The descriptor is an array of short integers that describes the face - similar faces will have similar descriptors.
From the previous step you have one FaceData structure for the ID image and one FaceData structure for the live image.
Pass each image with its corresponding FaceData to the function VisageFaceRecognition::extractDescriptor().
Pass the two descriptors to the function VisageFaceRecognition::descriptorsSimilarity() to compare the two descriptors to each other and obtain the measure of their similarity. This is a float value between 0 (no similarity) and 1 (perfect similarity).
If the similarity is greater than a chosen threshold, consider that the live face matches the ID face.
By choosing the threshold, you control the trade-off between False Positives and False Negatives:
If the threshold is very high, there will be virtually no False Positives, i.e. the system will never incorrectly declare a match when in reality the live person is not the person in the ID.
However, with a very high threshold a False Negative may happen more often - not matching a person who really is the same as in the ID, resulting in an alert that will need to be handled in an appropriate way (probably requiring human intervention).
Conversely, with a very low threshold, such a “false alert” will virtually never be raised, but the system may then fail to detect True Negatives - the cases when the live person really does not match the ID.
There is no “correct” threshold, because it depends on the priority of a specific application. If the priority is to avoid false alerts, threshold may be lower; if the priority is to avoid undetected non-matches then the threshold should be higher.
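The trade-off above can be made concrete with a small sketch. The similarity values would come from VisageFaceRecognition::descriptorsSimilarity() in a real application; here they are hard-coded toy values, and the `Pair`/`evaluate` names are this sketch's own, not SDK identifiers.

```cpp
#include <vector>

// A labeled test pair: the computed similarity plus the ground truth
// of whether the two images really show the same person.
struct Pair { float similarity; bool samePerson; };

struct Errors { int falsePositives = 0; int falseNegatives = 0; };

// Counts false positives and false negatives for a given threshold,
// illustrating how the threshold shifts the error balance.
Errors evaluate(const std::vector<Pair>& pairs, float threshold) {
    Errors e;
    for (const Pair& p : pairs) {
        bool declaredMatch = p.similarity >= threshold;
        if (declaredMatch && !p.samePerson) ++e.falsePositives;
        if (!declaredMatch && p.samePerson) ++e.falseNegatives;
    }
    return e;
}
```

Running `evaluate` over the same labeled pairs with a low and a high threshold shows the effect directly: lowering the threshold trades false negatives for false positives, and vice versa.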
Note |
---|
Need to insert links to relevant API parts in text and/or as “See also” section. |
Troubleshooting - Licensing Errors
The following articles describe the various error codes produced by the Visage Technologies licensing system, and what may be done to remedy each error. They are valid for visage|SDK version 8.6b1 and higher.
Error code 0x00000001 (VS_ERROR_INVALID_LICENSE)
The error you received indicates that the issued license is not valid. This error occurs when the BundleID does not match the one in your application, or when a license that is registered to one product is used with another product. Can you please check if you’re using the correct license?
Error code 0x00000002 (VS_ERROR_EXPIRED_LICENSE)
The error you received indicates that the issued license has expired. Would you be interested in renewing your license?
Error code 0x00000004 (VS_ERROR_EARLIER_VERSION_LICENSE)
The error you received indicates that the license version you are using is out of date, so you would need to update to the newest available version. Would you be interested in updating your license?
Error code 0x00000008 (VS_ERROR_MISSING_KEYFILE)
The error you received indicates that the application cannot locate the license key file. Please verify that the license key file is present in the folder whose path is used to initialize the license manager. For more details, please follow the instructions in Documentation -> Licensing.
Error code 0x00000010 (VS_ERROR_NO_LICENSE)
The error you received indicates that there is currently no license available. Please follow the instructions from Documentation -> Licensing on how to correctly use the license key in your application.
Error code 0x00000020 (VS_ERROR_CORRUPT_VERSION_STRING)
The error you received indicates that the license keys were somehow modified. Please try using a clean copy of the license key that was sent to you to see if the error still occurs.
Error code 0x00000040 (VS_ERROR_LICENSE_VALIDATION_FAILURE)
The error you received indicates that the server has rejected the license. This was because <reason>.
[Internal] - Check on the VTLS server for the reason why license was rejected. It can be one of the following (the most common being -6):
-1 - License Key sent to VTLS server for validation was malformed
-2 - Device ID sent to VTLS server for validation was malformed
-3 - License does not exist on VTLS server
-4 - License is blocked on VTLS server
-5 - License expired
-6 - Installations limit reached
-7 - Concurrent users limit reached
Error code 0x00000080 (VS_ERROR_NC_CONNECTION_FAILED)
The error you received indicates that there was a failure in connection to our licensing server. Can you please let me know if you want to use offline or online activation?
Error code 0x00000100 (VS_ERROR_TEMPERED_KEYFILE)
The error you received indicates that the license keys were somehow modified. Please try using a clean copy of the license key that was sent to you to see if the error still occurs.
Error code 0x00000200 (VS_ERROR_TEMPERED_KEYSTRING)
The error you received indicates that the license keys were somehow modified. Please try using a clean copy of the license key that was sent to you to see if the error still occurs.
Error code 0x00000400 (VS_ERROR_TEMPERED_DATE)
The error you received indicates that the date on the computer was modified. Please make sure the date on the computer is correct.
Error code 0x00000800 (VS_ERROR_UNREADABLE_KEYSTRING)
The error you received indicates that the license keys were somehow modified. Please try using a clean copy of the license key that was sent to you to see if the error still occurs.
Error code 0x00001000 (VS_ERROR_INVALID_OS)
The error you received indicates that your license key does not support the platform on which you are trying to use it. Can you please let me know on which platforms you'd like to use visage|SDK?
Error code 0x00002000 (VS_ERROR_INVALID_URL)
The error you received indicates that the license was issued for a different domain. Can you please confirm the domain name on which you want to use visage|SDK?
Troubleshooting
I am using the visage|SDK FaceRecognition gallery with a large number of descriptors (100,000+) and it takes 2 minutes to load the gallery. What can I do about it?
The visage|SDK FaceRecognition simple gallery implementation was not designed for use cases with a large number of descriptors.
Use the API functions that provide raw descriptor output (extractDescriptor) and descriptor similarity comparison (descriptorsSimilarity) to implement your own gallery solution, in whatever technology is appropriate for your use case.
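One possible approach to a custom gallery for very large descriptor counts is sketched below: store all descriptors back-to-back in a flat binary file and load them with a single bulk read instead of per-entry parsing. This is a hedged illustration under assumptions, not SDK code: descriptor extraction itself would use visage|SDK's extractDescriptor(), the fixed DESCRIPTOR_SIZE is assumed for illustration, and a real solution would also store the person IDs alongside the descriptors.

```cpp
#include <cstddef>
#include <fstream>
#include <vector>

constexpr std::size_t DESCRIPTOR_SIZE = 256;  // assumed descriptor length

// Writes all descriptors back-to-back as raw shorts.
void saveGallery(const char* path, const std::vector<short>& flat) {
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(flat.data()),
              flat.size() * sizeof(short));
}

// Loads the whole file with one bulk read; entry i occupies
// flat[i * DESCRIPTOR_SIZE .. (i + 1) * DESCRIPTOR_SIZE - 1].
std::vector<short> loadGallery(const char* path) {
    std::ifstream in(path, std::ios::binary | std::ios::ate);
    std::streamsize bytes = in.tellg();
    in.seekg(0);
    std::vector<short> flat(bytes / sizeof(short));
    in.read(reinterpret_cast<char*>(flat.data()), bytes);
    return flat;
}
```

A single contiguous read of a fixed-size binary layout typically loads orders of magnitude faster than parsing entries one by one, which is the point of rolling your own storage here.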
Note |
---|
Need to insert links to relevant API parts in text and/or as “See also” section. |
I want to change the camera resolution in Unity application VisageTrackerDemo. Is this supported and how can I do this?
Depending on the platform, this is already possible out of the box.
On Windows and Android, the camera resolution can be changed via Tracker object properties defaultCameraWidth and defaultCameraHeight within the Unity Editor. When the default value -1 is used, the resolution is set to 800 x 600 in the native VisageTrackerUnityPlugin.
On iOS it’s not possible to change the resolution out of the box from the demo application. The camera resolution is hard-coded to 480 x 600 within the native VisageTrackerUnityPlugin.
VisageTrackerUnityPlugin is provided with full source code within the package distribution.
I am getting EntryPointNotFoundException when attempting to run a Unity application in the Editor. Why does my Unity application not work?
Make sure that you have followed all the steps from the documentation Building and running Unity application.
Note |
---|
Need to insert links to relevant API parts in text and/or as “See also” section. |
...