...
...
Obtain new library files (usually in visageSDK/lib) (.dll, .so, .a, .lib) and overwrite the old ones in your application.
Obtain new header files (usually in visageSDK/include) and overwrite the old ones in your application.
Obtain new data files (usually in visageSDK/Samples/data) and overwrite the old ones in your application (models as well).
Read about the changes to the configuration file parameters and apply them to your configuration. If you use the default configuration, just overwrite it with the new one.
...
Regarding GDPR and privacy issues in general: visage|SDK runs entirely on the client device and does not store or transmit any personal information (photos, names, or similar), nor any other data, with the sole exception of the Visage Technologies License Key File, used for licensing purposes.
Languages, platforms, tools, etc.
Can I use visage|SDK with Unity?
...
You can either start your own project from scratch (Unity3D integration), or look at the VisageTrackerUnityDemo sample that comes with full source code, so you can use it as know-how for your project.
...
For users willing to experiment, there is an undocumented and currently unsupported workaround that should allow using it.
visage|SDK is implemented in C++. It provides C++ and C# APIs. On Windows, the C# API is implemented as a C++/CLI library (VisageCSWrapper.dll) which, in theory, should be usable in other .NET languages, e.g. VB.NET. However, we have not tested this. The C# API documentation may be used as a basis for using the VisageCSWrapper C++/CLI library in VB.NET.
...
visage|SDK is implemented in C++ and provides a C++ API, which cannot easily be used directly in Python without a wrapper of C functions. visage|SDK provides such a wrapper in the form of VisageTrackerUnityPlugin, which was made specifically for integration with the Unity 3D engine. However, it can also be used by other applications/languages that support importing C functions from a library. At its core, VisageTrackerUnityPlugin is a high-level C-function API wrapper around the C++ API. In the case of Python, the ctypes library (a foreign function library for Python) can be used to import and use C functions from VisageTrackerUnityPlugin. As the source code is provided, VisageTrackerUnityPlugin can also be used as a basis for implementing a custom Python wrapper.
Even though it was tested, such usage of VisageTrackerUnityPlugin is not officially supported.
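For illustration, here is a minimal sketch of how such a C-interface wrapper around the C++ API can look. The function names here are invented for illustration; VisageTrackerUnityPlugin's actual exports are listed in its source code. The exported functions can then be loaded from Python via ctypes, from Flutter via dart:ffi, or from C# via P/Invoke.

```cpp
// Hypothetical sketch of a C-interface wrapper around the visage|SDK C++ API,
// in the spirit of VisageTrackerUnityPlugin. Function names are invented for
// illustration; see the plugin's source code for its real exports.
#include "VisageTracker.h"

static VisageSDK::VisageTracker* g_tracker = nullptr;

extern "C" {

// Initialize the tracker with a configuration file path.
void _initTracker(const char* configFile) {
    g_tracker = new VisageSDK::VisageTracker(configFile);
}

// Release the tracker when the client application shuts down.
void _releaseTracker() {
    delete g_tracker;
    g_tracker = nullptr;
}

} // extern "C"
```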
...
For users willing to experiment, there is an undocumented workaround that allows the use of visage|SDK in React Native.
...
For users willing to experiment, there is an undocumented workaround that allows the use of visage|SDK in Flutter.
visage|SDK is implemented in C++ and provides a C++ API. Therefore, direct calls from Flutter are not possible without a wrapper with C-interface functions. An example of such a wrapper is provided in visage|SDK in the form of VisageTrackerUnityPlugin, which provides a simpler, high-level API through a C interface. It is intended for integration with Unity 3D (in C# with P/Invoke), but it can also be used by other applications/languages that support importing and calling C functions from a native library, including Flutter.
...
An alternative to wrapping NSObject is to wrap the C++ API in C-interface functions and expose them through a bridging header. An example of such a wrapper is provided in visage|SDK in the form of VisageTrackerUnityPlugin, which provides a simpler, high-level API through a C interface.
...
We believe that it should be possible to use visage|SDK in a WebView, but we have not tried that, nor do we currently have any clients who have done so, so we cannot guarantee it. The performance will almost certainly be lower than with a native app.
...
Please note that the current HTML5 demos have not been optimized for use in mobile browsers. Therefore, for the best results, it is recommended to use a desktop browser.
...
Please note that the current HTML5 demos have not been optimized for use in mobile browsers. Therefore, for the best results, it is recommended to use a desktop browser.
...
Internet connection is needed only for license registration, which happens only once, when the application is used for the first time.
For hardware devices, this can typically be done at installation time: you connect the device to the network and run the application once, the license registration happens automatically, and after that no network connection is needed.
...
Yes, visage|SDK can be used with any IP camera. However, note that visage|SDK actually does not include camera support as part of the API; the API simply takes a bitmap image. Image grabbing from a camera is implemented at the application level.
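As an illustration, here is a minimal sketch of application-level frame grabbing, assuming OpenCV for reading the IP camera stream and a track() signature along the lines of the one in recent visage|SDK documentation. The stream URL is a placeholder; check the VisageTracker documentation of your release for the exact signature and configuration file names.

```cpp
#include <opencv2/opencv.hpp>
#include "VisageTracker.h"

int main() {
    // Hypothetical RTSP URL; OpenCV can open many IP camera streams directly.
    cv::VideoCapture cap("rtsp://192.168.0.10/stream");
    VisageSDK::VisageTracker tracker("Facial Features Tracker.cfg");

    VisageSDK::FaceData faceData[1];
    cv::Mat frame;
    while (cap.read(frame)) {
        // The API simply takes a raw bitmap; OpenCV delivers BGR pixel data.
        int* status = tracker.track(frame.cols, frame.rows,
                                    (const char*)frame.data, faceData,
                                    VISAGE_FRAMEGRABBER_FMT_BGR);
        if (status[0] == TRACK_STAT_OK) {
            // faceData[0] now holds head pose, feature points, etc.
        }
    }
    return 0;
}
```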
...
Processing and saving information to a file can be implemented in the parts of the sample projects where tracking from a video file is performed:
In the ShowcaseDemo project on Windows, the appropriate function is worker_DoWorkVideo().
In the FaceTracker2 sample project on Windows, the appropriate function is CVisionExampleDoc::trackFromVideo().
In the VisageTrackerDemo sample project on macOS, the appropriate function is trackingVideoThread().
Please search the source code of each sample for the exact location.
...
Our face tracking algorithm is among the best ones available but, like all Computer Vision algorithms, it has its limits, related to image quality, lighting conditions, occlusions, or specific factors such as head pose.
...
visage|SDK includes active liveness detection. The user is required to perform a simple facial gesture (smile, blink, or raise eyebrows). Face tracking is then used to verify that the gesture is actually performed. You can configure which gesture(s) you want to include. As the app developer, you also need to take care of displaying appropriate messages to the user.
All visage|SDK packages include the API for liveness detection. However, only visage|SDK for Windows and visage|SDK for Android contain a ready-to-run demo of Liveness Detection. So, for a quick test of the liveness detection function, it is probably easiest to download visage|SDK for Windows, run “DEMO_FaceTracker2.exe” and select “Perform Liveness” from the Liveness menu.
...
Locate the face in the image:
To locate the face, you can use detection (for a single image) or tracking (for a series of images from a camera feed).
See function VisageTracker::track() or VisageFeaturesDetector::detectFacialFeatures().
Each of these functions returns the number of faces in the image - if there is not exactly one face, you may report an error or take other actions.
Furthermore, these functions return the FaceData structure for each detected face, containing the face location.
Use VisageFaceRecognition::addDescriptor() to get the face descriptor and add it to the gallery of known faces together with the name or ID of the person.
The descriptor is an array of short integers that describes the face - similar faces will have similar descriptors.
The gallery is simply a database of face descriptors, each with an attached ID.
Note that you could store the descriptors in your own database, without using the provided gallery implementation.
Save the gallery using VisageFaceRecognition::saveGallery(). A minimal sketch of these registration steps follows below.
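The following sketch assumes the C++ API names referenced above; exact signatures may differ between visage|SDK releases, so treat it as an outline rather than a definitive implementation.

```cpp
#include "VisageFeaturesDetector.h"
#include "VisageFaceRecognition.h"

using namespace VisageSDK;

// Registers one face from a single image under the given name/ID.
// Returns false if the image does not contain exactly one face.
bool registerFace(VsImage* image, const char* personId,
                  VisageFeaturesDetector& detector,
                  VisageFaceRecognition& recognition) {
    FaceData faces[2];
    // Allow up to two detections so that a second face, if present,
    // can be noticed and reported as an error.
    int n = detector.detectFacialFeatures(image, faces, 2);
    if (n != 1)
        return false; // expect exactly one face for registration

    // addDescriptor() extracts the descriptor and stores it in the gallery
    // together with the person's name/ID.
    if (!recognition.addDescriptor(image, &faces[0], personId))
        return false;

    // Persist the gallery to disk (file name is a placeholder).
    return recognition.saveGallery("gallery.bin") == 1;
}
```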
Matching
At this stage, you match a new facial image (for example, a person arriving at a gate, reception, control point, or similar) against the previously stored gallery and obtain the IDs of one or more most similar persons registered in the gallery.
First, locate the face(s) in the new image.
The steps are the same as explained above in the Registration part. You obtain a FaceData structure for each located face.
Pass the FaceData to VisageFaceRecognition::extractDescriptor() to get the face descriptor of the person.
Pass this descriptor to VisageFaceRecognition::recognize(), which will match it to all the descriptors you have previously stored in the gallery and return the name/ID of the most similar person (or the desired number of most similar persons);
the recognize() function also returns a similarity value, which you may use to cut off false positives. These matching steps are sketched below.
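A minimal sketch of the matching steps; signatures are assumed from the visage|SDK documentation and may differ between releases, and the similarity threshold is an application-specific placeholder.

```cpp
#include <string>
#include <vector>
#include "VisageFaceRecognition.h"

using namespace VisageSDK;

// Matches one located face against the loaded gallery and returns the ID of
// the most similar person, or an empty string if below the chosen threshold.
std::string identify(VsImage* image, FaceData* face,
                     VisageFaceRecognition& recognition,
                     float threshold /* e.g. 0.5f, application-specific */) {
    std::vector<short> descriptor(recognition.getDescriptorSize());
    if (!recognition.extractDescriptor(face, image, descriptor.data()))
        return "";

    const char* names[1];
    float similarities[1];
    // Ask for the single most similar gallery entry (n = 1).
    recognition.recognize(descriptor.data(), 1, names, similarities);
    return similarities[0] >= threshold ? std::string(names[0]) : std::string();
}
```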
...
In each of the two images (live face and ID image), the face first needs to be located:
To locate the face, you can use detection (for a single image) or tracking (for a series of images from a camera feed).
Each of these functions returns the number of faces in the image - if there is not exactly one face, you may report an error or take other actions.
Furthermore, these functions return the FaceData structure for each detected face, containing the face location.
Note: the ID image should be cropped so that the ID occupies most of the image (if the face on the ID is too small relative to the whole image, it might not be detected).
The next step is to extract a face descriptor from each image. The descriptor is an array of short integers that describes the face. Similar faces will have similar descriptors.
From the previous step, you have one FaceData structure for the ID image and one FaceData structure for the live image.
Pass each image with its corresponding FaceData to the function VisageFaceRecognition::extractDescriptor().
Pass the two descriptors to the function VisageFaceRecognition::descriptorsSimilarity() to compare the two descriptors to each other and obtain the measure of their similarity. This is a float value between 0 (no similarity) and 1 (perfect similarity).
If the similarity is greater than the chosen threshold, consider that the live face matches the ID face.
By choosing the threshold, you control the trade-off between False Positives and False Negatives:
If the threshold is very high, there will be virtually no False Positives, i.e. the system will never declare a correct match when, in reality, the live person is not the person in the ID.
However, with a very high threshold, a False Negative may happen more often - not matching a person who really is the same as in the ID, resulting in an alert that will need to be handled in an appropriate way (probably requiring human intervention).
Conversely, with a very low threshold, such a “false alert” will virtually never be raised, but the system may then fail to detect True Negatives - the cases when the live person really does not match the ID.
There is no “correct” threshold, because it depends on the priorities of a specific application. If the priority is to avoid false alerts, the threshold may be lower; if the priority is to avoid undetected non-matches, then the threshold should be higher. A sketch of the whole comparison follows below.
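Putting the above together, here is a minimal sketch of the ID-versus-live comparison. API names are assumed as above, and the threshold of 0.6 is purely illustrative.

```cpp
#include <vector>
#include "VisageFaceRecognition.h"

using namespace VisageSDK;

// Compares the face located in the ID image with the face located in the
// live image. Returns true if their similarity exceeds the chosen threshold.
bool matchesId(VsImage* idImage, FaceData* idFace,
               VsImage* liveImage, FaceData* liveFace,
               VisageFaceRecognition& recognition) {
    const float threshold = 0.6f; // illustrative; tune per application priorities

    std::vector<short> idDesc(recognition.getDescriptorSize());
    std::vector<short> liveDesc(recognition.getDescriptorSize());
    recognition.extractDescriptor(idFace, idImage, idDesc.data());
    recognition.extractDescriptor(liveFace, liveImage, liveDesc.data());

    // Similarity is a float in [0, 1]: 0 = no similarity, 1 = perfect similarity.
    float similarity = recognition.descriptorsSimilarity(idDesc.data(), liveDesc.data());
    return similarity > threshold;
}
```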
...
visage|SDK does not have an out-of-the-box option to determine if the person is looking at the screen. However, it should not be too difficult to implement that. What visage|SDK does provide is:
...
Please also note that the estimated gaze direction may be a bit unstable (the gaze vectors appearing “shaky”) due to the difficulty of accurately locating the pupils. At the same time, the 3D head pose (head direction) is much more stable. Because people usually turn their heads in the direction in which they are looking, it may also be interesting to use the head pose as an approximation of the direction of gaze.
...
```cpp
// formula to get screen-space gaze
x = faceData->faceTranslation[2] * tan(faceData->faceRotation[1] + faceData->gazeDirection[0] + rOffsetX) / screenWidth;  // rOffsetX: angle of the camera in relation to the screen, ideally 0
y = faceData->faceTranslation[2] * tan(faceData->faceRotation[0] + faceData->gazeDirection[1] + rOffsetY) / screenHeight; // rOffsetY: angle of the camera in relation to the screen, ideally 0

// apply head and camera offset
x += -(faceData->faceTranslation[0] + camOffsetX); // camOffsetX: meters from the left edge of the screen
y += -(faceData->faceTranslation[1] + camOffsetY); // camOffsetY: meters from the top edge of the screen
```
Can images of crowds be processed?
...
Face tracking is limited to 20 faces (for performance reasons). To locate more faces in the image, use face detection (class VisageFeaturesDetector).
visage|SDK is capable of detecting/tracking faces whose size in the image is at least 5% of the image width (height in the case of portrait images). The default setting for VisageFeaturesDetector is to detect faces larger than 10% of the image size, and 15% in the case of VisageTracker; the default parameter for minimal face scale needs to be modified to process smaller faces (see the sketch below).
If you are using high-resolution images with many faces, so that each face is smaller than 5% of the image width, one solution may be to divide the image into portions and process each portion separately. Alternatively, a custom version of visage|SDK may be discussed.
For optimal performance of algorithms for face recognition and analysis (age, gender, emotion), faces should be at least 100 pixels wide.
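For illustration, here is a hedged sketch of lowering the minimal face scale for detection; the position and default value of the minFaceScale parameter are assumed from the visage|SDK documentation and may differ in your release.

```cpp
#include "VisageFeaturesDetector.h"

using namespace VisageSDK;

void detectSmallFaces(VisageFeaturesDetector& detector, VsImage* image) {
    FaceData faces[50];
    // maxFaces raised to 50; minFaceScale lowered from the default (~0.1)
    // to 0.05, i.e. faces as small as 5% of the image width.
    int n = detector.detectFacialFeatures(image, faces, 50, 0.05f);
    // process the n detected faces ...
}
```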
I've seen the tiger mask in your demo - how can I build my own masks?
...
Our sample projects give you a good starting point for implementing your own masks and other effects using powerful mainstream graphics applications (such as Blender, Photoshop, Unity 3D, and others). Specifically:
ShowcaseDemo
ShowcaseDemo, available as a sample project with full source code, includes a basic face mask effect (the tiger mask). It is implemented by creating a face mesh at run-time, based on data provided by VisageTracker through the FaceData::faceModel* class members, applying a tiger texture and rendering it with OpenGL.
The mesh uses static texture coordinates, so it is fairly simple to replace the texture image and use other themes instead of the tiger mask. We provide the texture image in a form that makes it fairly easy to create other textures in Photoshop and use them as a face mask. This is the template texture file (jk_300_textureTemplate.png) found in the visageSDK\Samples\data\ directory. You can simply create a texture image with facial features (mouth, nose, etc.) placed according to the template image, and use this texture instead of the tiger. You can modify the texture in the ShowcaseDemo sample by changing the texture file, which is set in line 331 of the ShowcaseDemo.xaml.cs source file:
```csharp
331: gVisageRendering.SetTextureImage(LoadImage(@"..\Samples\OpenGL\data\ShowcaseDemo\tiger_texture.png"));
```
VisageTrackerUnityDemo
For a sample project based on Unity 3D, see the VisageTrackerUnityDemo page. It includes the tiger mask effect and a 3D model (glasses) superimposed on the face. Unity 3D is an extremely powerful game/3D engine that gives you many more choices and more freedom in implementing your effects, while starting from the basic ones provided in our project. For more information on the sample project, see Samples – Unity 3D – Visage Tracker Unity Demo in the Documentation. Furthermore, the “Animation and AR modeling guide” document, available in the Documentation under the link “Resources”, explains how to create and import a 3D model to overlay on the face, which may also be of interest to you.
In VisageTrackerUnityDemo, the tiger face mask effect is achieved using the same principles as in ShowcaseDemo. Details of the implementation can be found in the Tracker.cs file (located in the visageSDK\Samples\Unity\VisageTrackerUnityDemo\Assets\Scripts\ directory) by searching for the keyword "tiger".
Troubleshooting
I am using the visage|SDK FaceRecognition gallery with a large number of descriptors (100,000+) and it takes 2 minutes to load the gallery. What can I do about it?
...
Use the API function that provides raw descriptor output (VisageFaceRecognition::extractDescriptor()) and the descriptor similarity comparison function (VisageFaceRecognition::descriptorsSimilarity()) to implement your own gallery solution in the technology of your choice that is appropriate for your use case, as sketched below.
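For example, a minimal custom gallery could keep descriptors in memory (or in any database) and scan them with descriptorsSimilarity(). The structure below is a hypothetical sketch, not visage|SDK API.

```cpp
#include <string>
#include <vector>
#include "VisageFaceRecognition.h"

using namespace VisageSDK;

// Hypothetical custom gallery entry: descriptors could equally well live in
// a database or a memory-mapped file for fast loading.
struct GalleryEntry {
    std::string id;
    std::vector<short> descriptor;
};

// Linear scan over the custom gallery; returns the best-matching ID and
// writes the corresponding similarity to bestSimilarity.
std::string bestMatch(const std::vector<GalleryEntry>& gallery,
                      short* descriptor,
                      VisageFaceRecognition& recognition,
                      float& bestSimilarity) {
    std::string bestId;
    bestSimilarity = 0.0f;
    for (const GalleryEntry& entry : gallery) {
        float s = recognition.descriptorsSimilarity(
            descriptor, const_cast<short*>(entry.descriptor.data()));
        if (s > bestSimilarity) {
            bestSimilarity = s;
            bestId = entry.id;
        }
    }
    return bestId;
}
```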
I want to change the camera resolution in Unity application VisageTrackerDemo. Is this supported and how can I do this?
...
Make sure that you have followed all the steps from the documentation, Building and running Unity application.
...
...
Verify that the build target and the visage|SDK platform match. For example, running visage|SDK for Windows inside the Unity Editor or in a Standalone application will work, since both run on the same platform. Attempting to run visage|SDK for iOS inside the Unity Editor on macOS will output an error because the iOS architecture does not match the macOS architecture.
...
Errors concerning 'NSString' or other Foundation classes encountered in a client project which includes iOS sample files (VisageRendering.cpp, etc.)?
It is necessary to make sure that VisageRendering.cpp is compiled as an Objective-C++ source, and not as a C++ source, by changing the 'Type' property of the file in the right-hand property panel in the Xcode editor. This applies generally to any source file which includes/imports (directly or indirectly) any Apple classes.
Troubleshooting - Licensing Errors
...