
General questions

What skills do I need in order to use visage|SDK?

Software development skills are required to make use of visage|SDK.

visage|SDK is a Software Development Kit - a set of documented software libraries that software developers can use to integrate Visage Technologies' face tracking, analysis and recognition algorithms into their applications.

Can I install your software on my computer and just run it?

No. visage|SDK is not a finalized application, but a Software Development Kit that you use to develop or integrate your own application. Your software development team can do this based on the documentation delivered as part of visage|SDK.

Can you develop an application for me?

Visage Technologies' software development team is available to develop custom-made applications to your requirements, using our face technology as the basis (we don’t do general app development). To do that, we need specific and detailed requirements for the application, including:

  • What should the application do?

  • What sort of user interface should it have?

  • On what kind of computer/device should it run – Windows? iOS? Android? Other?

Some additional remarks on the way of working:

  • Please note that these are just initial questions to get the discussion going; quite a bit more detail will likely be required, depending on the complexity of your requirements.

  • Based on the requirements worked out to a suitable level, we could make a Project Proposal, including the time and cost, for building your application.

  • Please note that working out your requirements and preparing the Project Proposal may be considerable work in itself, so, depending on the complexity of your requirements, we may need to charge for that work as well.

Your contact person can advise on further details.

Am I entitled to receive technical support?

  • Most of our licenses include an initial 5 hours of support, so if you have purchased a license for visage|SDK, you can use this support (delivered via email).

  • For the majority of our clients the initial support hours are more than sufficient, but it is also possible to order additional support - your contact person can advise you on this.

  • If you are evaluating visage|SDK and have technical issues, we will do our best, within reasonable limits, to support your evaluation.

How do I request technical support?

If you have a technical issue using visage|SDK, please email your Visage Technologies contact person the following information:

  • an error report, including the error messages you receive and anything else you think may help our team resolve the issue,

  • the operating system you are using,

  • the version of visage|SDK (please see it in the upper left corner of the documentation as in the image below).

This is a reference to the deprecated offline documentation. It needs to be replaced with new instructions on how to obtain the SDK version.

How do I migrate to a newer version of visage|SDK?

Migration to a new version of visage|SDK is covered on the migration.html page within the visage|SDK Documentation.

This is a reference to the deprecated offline documentation. It needs to be replaced with new instructions on how to obtain the SDK version.

Typical steps when upgrading visage|SDK are:

  1. Obtain new library files (usually in visageSDK/lib) (.dll, .so, .a, .lib) and overwrite the old ones in your application

  2. Obtain new header files (usually in visageSDK/include) and overwrite the old ones in your application

  3. Obtain new data files (usually in visageSDK/Samples/data) and overwrite the old ones in your application (models as well).

  4. Read about the changes in parameters of the configuration files and apply them to your configuration. In case you use default configuration, just overwrite it with the new one.
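As an illustration, steps 1-3 amount to copying the new package's files over your application's old copies. The sketch below simulates that with temporary folders; all paths and file names are illustrative, not the actual layout of your project:

```python
# Sketch of the "overwrite old SDK files with new ones" part of an upgrade.
# Temporary directories stand in for the new visageSDK package and for your
# application folder; only the copy pattern matters here.
import pathlib
import shutil
import tempfile

new_sdk = pathlib.Path(tempfile.mkdtemp())  # stands in for the new visageSDK folder
app = pathlib.Path(tempfile.mkdtemp())      # stands in for your application folder

# Simulate an old library in the app and a newer one in the new package.
(new_sdk / "lib").mkdir()
(new_sdk / "lib" / "libVisageVision.so").write_text("v8.7")
(app / "lib").mkdir()
(app / "lib" / "libVisageVision.so").write_text("v8.4")

# Overwrite the application's copies with the new package's files (step 1;
# the same pattern applies to headers and data files in steps 2 and 3).
shutil.copytree(new_sdk / "lib", app / "lib", dirs_exist_ok=True)

print((app / "lib" / "libVisageVision.so").read_text())  # v8.7
```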

Languages, platforms, tools etc.

Can I use visage|SDK with Unity?

visage|SDK packages for Windows, iOS, Android, Mac OS X and HTML5 each provide Unity integration, including sample Unity projects with full source code. For more details, please see documentation.html, which can be found in the root folder of every visage|SDK package. Specifically, in the documentation, click Samples, then Unity3D, to find information about Unity integration and the relevant sample projects.

This FAQ entry contains pointers to the visage|SDK offline documentation which is deprecated. These pointers need to be replaced with links to online documentation.

Can I use visage|SDK with C#?


For the C# API, please look in documentation.html (in the visage|SDK root folder), under API\C#. This is a managed C# wrapper that exposes all visage|SDK functionalities – face tracking, analysis and recognition.

The C# API is implemented in the libVisageCSWrapper.dll library (libVisageVision.dll is required at runtime).

Additionally, we provide the VisageTrackerUnity Plugin, a C# wrapper made specifically for integration with the Unity 3D engine. For more information, please see documentation.html under Samples\Unity3D. The VisageTrackerUnity Plugin is included with visage|SDK for Windows, Mac OS X, iOS and Android.

This FAQ entry contains pointers to the visage|SDK offline documentation which is deprecated. These pointers need to be replaced with links to online documentation.

Can I use visage|SDK with Java?

visage|SDK is implemented in C++. It provides C++ and C# APIs, but unfortunately does not currently include a Java API. visage|SDK for Android includes JNI-based Java wrappers as part of the sample projects provided in the package; you could use these wrappers as a starting point for your own projects in Java. In conclusion, it is certainly possible to use visage|SDK with Java, but it requires some additional effort for interfacing via JNI wrappers.

Can I use visage|SDK with VB.NET (Visual Basic)?


visage|SDK does not currently provide a VB.NET API.
For users willing to experiment, there is an undocumented and currently unsupported workaround that should allow you to use it.
visage|SDK is implemented in C++ and provides C++ and C# APIs. On Windows, the C# API is implemented as a C++/CLI library (VisageCSWrapper) which, in theory, should be usable from other .NET languages such as VB.NET. However, we have not tested this. The C# API documentation may be used as a basis for using the VisageCSWrapper C++/CLI library from VB.NET.

Can I use visage|SDK with Python?


visage|SDK does not currently provide a Python API.
For users willing to experiment, there is an undocumented workaround that allows the use of Python.
visage|SDK is implemented in C++ and provides a C++ API, which cannot easily be used directly from Python without a C-function wrapper. visage|SDK provides such a wrapper in the form of the VisageTrackerUnity Plugin. It was made specifically for integration with the Unity 3D engine, but it can also be used by other applications and languages that support importing C functions from a library. At its core, the VisageTrackerUnity Plugin is a high-level C-function API wrapper around the C++ API. In the case of Python, the ctypes library (a foreign function library for Python) can be used to import and call C functions from the VisageTrackerUnity Plugin. As source code is provided, the VisageTrackerUnity Plugin can also be used as the basis for a custom Python wrapper.
Although this usage of the VisageTrackerUnity Plugin has been tested, it is not officially supported.
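To illustrate the ctypes pattern, here is a minimal, self-contained sketch. It loads the C standard library as a runnable stand-in; the commented-out plugin library name and the idea of declaring argument/return types before calling apply equally to the plugin's exported C functions (the plugin file name shown is an assumption):

```python
import ctypes
import ctypes.util

# Locate and load a shared library. For visage|SDK you would load the
# VisageTrackerUnity plugin library instead, e.g.:
#   plugin = ctypes.CDLL("./libVisageTrackerUnityPlugin.so")  # name is an assumption
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

# Declare the C function's signature before calling it, exactly as you
# would for the plugin's exported C functions.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # -> 42
```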

Can I use visage|SDK with UWP?

Our API is a lower-level C++ API and we have no specific UWP integration features, but we see no reason why you would not be able to use it in a UWP app.

Can I use visage|SDK with React Native?

visage|SDK does not currently provide direct support for React Native.
For users willing to experiment, there is an undocumented workaround that allows the use of visage|SDK in React Native.
visage|SDK is implemented in C++ and provides a C++ API, so direct calls from React Native are not possible without a wrapper exposing C-interface functions. An example of such a wrapper is provided in visage|SDK in the form of the VisageTrackerUnity Plugin, which provides a simpler, high-level API through a C interface. It is intended for integration with Unity3D (from C# with P/Invoke), but it can also be used by other applications and languages that support importing and calling C functions from a native library, including React Native.

Can I use visage|SDK with Swift?

visage|SDK does not currently provide a Swift API.
visage|SDK is implemented in C++ and provides a C++ API, which cannot be used directly from Swift without first wrapping the C++ API in an NSObject and exposing it to Swift through a bridging header. The NSObject wrapper does not have to be a one-to-one mapping of the C++ classes; it can be a higher-level mapping, and fragments of source code from the provided iOS sample projects can be used as building blocks.
A general example of how this technique is usually implemented can be found here:
https://stackoverflow.com/questions/48971931/bridging-c-code-into-my-swift-code-what-file-extensions-go-to-which-c-based-l

Can I use visage|SDK in WebView (in iOS and Android)?

We believe it should be possible to use visage|SDK in a WebView, but we have not tried it, nor do we currently have any clients who have done so, so we cannot give you a guarantee. Performance will almost certainly be lower than in a native app.

Is there visage|SDK for Raspberry Pi?


visage|SDK 8.4 for rPI can be downloaded from the following link: https://www.visagetechnologies.com/downloads/visageSDK-rPI-linux_v8.4.tar.bz2
Once you unpack it, in the root folder you will find the documentation to guide you through the API and the available sample projects.

Important notes:

  • Because of very low demand we currently provide visage|SDK for rPI on-demand only.

  • The package you have received is visage|SDK 8.4. The latest release is visage|SDK 8.7 but that is not available for rPI yet. If your initial tests prove interesting we will need to discuss the possibility to build the latest version on-demand for you. visage|SDK 8.7 provides better performance and accuracy, but the API is mostly unchanged so you can run relevant initial tests.

  • visage|SDK 8.4 for rPI has been tested with rPI3b+; it should work with rPI4 but we have not tested that.

Will visage|SDK for HTML5 work in browsers on smartphones?


The HTML5 demo page contains the list of supported browsers: https://visagetechnologies.com/demo/#supported-browsers

Please note that the current HTML5 demos have not been optimized for use in mobile browsers so for best results it is recommended to use a desktop browser.

On iOS, the HTML5 demos work in Safari browser version 11 and higher. They do not work in Chrome and Firefox browsers due to limitations on camera access.
(https://stackoverflow.com/questions/59190472/webrtc-in-chrome-ios)

Which browsers does visage|SDK for HTML5 support?


The HTML5 demo page contains the list of supported browsers: https://visagetechnologies.com/demo/#supported-browsers

Please note that the current HTML5 demos have not been optimized for use in mobile browsers so for best results it is recommended to use a desktop browser.

Cameras, hardware

Can I use visage|SDK with an IP camera?

Yes, visage|SDK can be used with any IP camera. Note, however, that visage|SDK does not include camera support as part of the API; the API simply takes a bitmap image. Image grabbing from the camera is implemented at the application level.

Our sample projects show how to access local cameras; we currently have no ready-to-go code for accessing IP cameras. There are various ways to grab images from IP cameras, for example using libVLC.
Usually, an IP camera provides a URL for accessing its raw feed. If you can open that URL and see the feed in VideoLAN VLC Media Player, the same URL can be used in libVLC to access the feed programmatically.
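As a small sketch of this approach, the helper below composes a typical RTSP feed URL. The scheme, credentials, port and path shown are illustrative; consult your camera's manual for its actual feed URL:

```python
# Sketch: composing an RTSP URL for an IP camera feed. Once the URL is known
# to play in VLC, the same string can be passed to libVLC (or, alternatively,
# to a capture library such as OpenCV's cv2.VideoCapture) to grab frames.
def rtsp_url(host: str, user: str = "", password: str = "",
             path: str = "stream1", port: int = 554) -> str:
    # Embed credentials only when a user name is given.
    auth = f"{user}:{password}@" if user else ""
    return f"rtsp://{auth}{host}:{port}/{path}"

url = rtsp_url("192.0.2.10", user="admin", password="secret")
print(url)  # rtsp://admin:secret@192.0.2.10:554/stream1
```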

Are these camera specs ok for visage|SDK: 1920 x 1080, 30 fps, mono?

The mentioned camera parameters (1920 x 1080, 30 fps, mono) should be appropriate for our software and for most use cases.

In general, visage|SDK works with a very wide range of camera resolutions and can provide sustainable tracking on faces as small as 30×30 pixels. For further details on inputs and other features specifically for face tracking please see https://visagetechnologies.com/facetrack-features/

What should be the camera Field of View (FoV)?

The choice of FoV should primarily depend on the use case (the planned position of the camera and the captured scene). Our algorithm should be robust enough to handle some optical distortion caused by lenses with a large FoV; however, extreme distortions (e.g. from a fish-eye lens) will negatively affect the algorithm's performance.

Face Tracking

How many faces can be tracked simultaneously?

The maximum number of faces that can be tracked simultaneously is internally limited to 20 for performance reasons.

Using VisageFeaturesDetector, any number of faces can be detected in an image.

How far from the camera can the face be tracked?

The tracking distance depends on the camera resolution. For example, with a webcam at 1920×1080 resolution (Logitech C920), the tracker can be configured to detect and track faces up to ~7.25 meters from the camera (with a performance tradeoff).

Face tracking does not work as I expected

Our face tracking algorithm is among the best available, but like all Computer Vision algorithms it has limits related to image quality, lighting conditions, occlusions and specific factors such as head pose.

If you notice specific issues or have special requirements, you may send us your test video footage and any specific requests, and we will process it and send you back the tracking results. This may allow us to fine-tune the tracker configuration to your specific requirements and send you the best possible results.

How can I test and use ear tracking?

The fastest way to test ear tracking is in the online Showcase Demo (https://visagetechnologies.com/demo/); simply enable Ears in the Draw Options menu.

The online demo is based on visage|SDK for HTML5 and has some limitations due to the HTML5 implementation. For even better performance, you may want to download visage|SDK for Windows, Android or iOS - each of these packages contains a native Showcase Demo in which ear tracking can be tried.

If you are already developing with visage|SDK and want to enable ear tracking, you can use the ready-made configuration file "Facial Features Tracker - High - With Ears.cfg". Ear tracking is enabled using the refine_ears configuration parameter.
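For illustration, enabling this in a tracker configuration file would look something like the following (a hypothetical fragment; the exact syntax and valid values are described in the Configuration Manual):

```
refine_ears 1
```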

Add a See also section with links to the Configuration Manual, to the API for setting configuration parameters, and to the FaceData structure for accessing the tracking results.

Face Recognition

How far from a camera can a face be recognized?

This depends on camera resolution. Face Recognition works best when the size of the face in the image is 150 pixels or more.

High-level functionalities

How do I perform liveness detection with visage|SDK?

visage|SDK includes active liveness detection: the user is required to perform a simple facial gesture (smile, blink or eyebrow raise), and face tracking is then used to verify that the gesture is actually performed. You can configure which gesture(s) you want to include. As the app developer, you also need to take care of displaying appropriate messages to the user.

All visage|SDK packages include the API for liveness detection. However, only visage|SDK for Windows and visage|SDK for Android contain a ready-to-run demo of Liveness Detection. So, for a quick test of the liveness detection function, it is probably easiest to download visage|SDK for Windows, run "DEMO_FaceTracker2.exe" and select "Perform Liveness" from the Liveness menu.

The technical demos in Android and Windows packages of visage|SDK include the source code intended to help you integrate liveness detection into your own application.

Need to insert links to the documentation of these sample projects and the Liveness API.

How do I perform identification of a person from a database?

These guidelines are written for a use case of identifying a student from a database of students. They can easily be used for other cases, such as employees.

The main steps involved in implementing the identification process are registration and identification, as follows.

  • Register all students in a school, let’s say 2000 of them, by doing the following for each (presuming you have their images):

    • run face tracker on the image to find the face (VisageTracker.Track()) and obtain the FaceData;

    • use VisageFaceRecognition.AddDescriptor() to get the face descriptor and add it to the gallery of known faces together with the name or ID of the student;

    • save the gallery using VisageFaceRecognition.SaveGallery().

  • Then, for each person arriving at the identification point (gate, POS etc.):

    • run face tracker on live camera image to find the face (VisageTracker.Track()) and obtain the FaceData;

    • pass FaceData to VisageFaceRecognition.ExtractDescriptor() to get the face descriptor of the person;

    • pass this descriptor to VisageFaceRecognition.Recognize(), which will match it to all the descriptors you have previously stored in the gallery and return the name of the most similar student;

    • the Recognize() function also returns a similarity value, which you may use to cut off the false positives.
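The registration/identification flow above can be sketched end to end. In this toy version the visage|SDK calls (Track, AddDescriptor, Recognize, ...) are replaced by a plain in-memory gallery and cosine similarity, purely to show the control flow; the real API works on FaceData and SDK-produced descriptors:

```python
# Toy stand-in for the registration/identification flow. Descriptors are
# plain lists of floats here; in visage|SDK they come from the recognition API.
import math

gallery = []  # list of (name, descriptor) pairs

def add_descriptor(name, descriptor):
    # Registration: store a descriptor together with the student's name/ID.
    gallery.append((name, descriptor))

def similarity(a, b):
    # Cosine similarity as an illustrative stand-in for the SDK's measure.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recognize(descriptor):
    # Identification: return the most similar registered identity plus the
    # similarity value, which the caller compares to a threshold to cut off
    # false positives.
    return max(((name, similarity(descriptor, d)) for name, d in gallery),
               key=lambda t: t[1])

# Registration phase (one descriptor per student image):
add_descriptor("alice", [0.9, 0.1, 0.0])
add_descriptor("bob", [0.1, 0.9, 0.2])

# Identification phase (descriptor extracted from the live camera image):
name, score = recognize([0.88, 0.15, 0.05])
print(name)  # -> alice (highest similarity)
```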

Need to insert links to relevant API parts in text and/or as “See also” section.

How do I perform verification of a live face vs. ID photo?

The scenario is verification of a live face image against the image of a face from an ID. It is done in four main steps:

  • Locate the face in each of the two images;

  • Extract the face descriptor from each of the two faces;

  • Compare the two descriptors to obtain the similarity value;

  • Compare the similarity value to a chosen threshold, resulting in a match or non-match.

These steps are described here with further detail:

  • In each of the two images (live face and ID image), the face first needs to be located:

    • To locate the face, you can use detection (for a single image) or tracking (for a series of images from a camera feed).

      • See function VisageSDK::VisageTracker::track() or VisageFeaturesDetector::detectFacialFeatures().

    • Each of these functions returns the number of faces in the image - if there is not exactly one face you may report an error or take other action.

    • Furthermore, these functions return the FaceData structure for each detected face, containing the face location.

    • Note: the ID image should be cropped so that the ID is occupying most of the image (if the face on the ID is too small relative to the whole image it might not be detected).

  • The next step is to extract a face descriptor from each image. The descriptor is an array of short integers that describes the face - similar faces will have similar descriptors.

    • From the previous step you have one FaceData structure for the ID image and one FaceData structure for the live image.

    • Pass each image with its corresponding FaceData to the function VisageFaceRecognition::extractDescriptor().

  • Pass the two descriptors to the function VisageFaceRecognition::descriptorsSimilarity() to compare the two descriptors to each other and obtain the measure of their similarity. This is a float value between 0 (no similarity) and 1 (perfect similarity).

  • If the similarity is greater than a chosen threshold, consider that the live face matches the ID face.

    • By choosing the threshold, you control the trade-off between False Positives and False Negatives:

      • If the threshold is very high, there will be virtually no False Positives, i.e. the system will virtually never declare a match when in reality the live person is not the person in the ID.

      • However, with a very high threshold a False Negative may happen more often - not matching a person who really is the same as in the ID, resulting in an alert that will need to be handled in an appropriate way (probably requiring human intervention).

      • Conversely, with a very low threshold, such "false alerts" will virtually never be raised, but the system may then fail to catch non-matches (False Positives) - the cases when the live person really does not match the ID.

      • There is no “correct” threshold, because it depends on the priority of a specific application. If the priority is to avoid false alerts, threshold may be lower; if the priority is to avoid undetected non-matches then the threshold should be higher.
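The threshold trade-off above can be shown in a few lines. The similarity values would come from VisageFaceRecognition::descriptorsSimilarity(); here they are hard-coded for illustration:

```python
# Toy illustration of the match/non-match decision and the threshold trade-off.
def verify(similarity: float, threshold: float) -> bool:
    """Return True (match) when the similarity clears the chosen threshold."""
    return similarity >= threshold

strict, lenient = 0.85, 0.45  # illustrative thresholds, not SDK recommendations

print(verify(0.72, strict))   # False: strict threshold, risks False Negatives
print(verify(0.72, lenient))  # True: lenient threshold, risks False Positives
```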

Need to insert links to relevant API parts in text and/or as “See also” section.

Troubleshooting

I am using the visage|SDK FaceRecognition gallery with a large number of descriptors (100,000+) and it takes 2 minutes to load the gallery. What can I do about it?

The simple gallery implementation in visage|SDK FaceRecognition was not designed for use cases with a large number of descriptors.

Use the API functions that provide raw descriptor output (extractDescriptor) and descriptor similarity comparison (descriptorsSimilarity) to implement your own gallery solution, in whatever technology is appropriate for your use case.
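One way to make loading fast is to store all descriptors in a single flat binary file, so loading is one sequential read with no per-descriptor parsing. The sketch below assumes descriptors are fixed-length arrays of short integers (as extractDescriptor returns); the descriptor length of 4 is purely illustrative:

```python
# Sketch of a custom gallery store with fast bulk save/load, assuming
# fixed-length descriptors of short integers packed back to back.
import struct

DESC_LEN = 4  # illustrative; use the real descriptor length from the SDK

def save_gallery(path, descriptors):
    # Pack every descriptor into one contiguous binary file.
    with open(path, "wb") as f:
        for d in descriptors:
            f.write(struct.pack(f"{DESC_LEN}h", *d))

def load_gallery(path):
    # Read descriptors back in fixed-size chunks.
    size = struct.calcsize(f"{DESC_LEN}h")
    out = []
    with open(path, "rb") as f:
        while chunk := f.read(size):
            out.append(list(struct.unpack(f"{DESC_LEN}h", chunk)))
    return out

save_gallery("gallery.bin", [[1, 2, 3, 4], [5, 6, 7, 8]])
print(load_gallery("gallery.bin"))  # [[1, 2, 3, 4], [5, 6, 7, 8]]
```

Names, IDs and an index for fast matching would be layered on top in whatever storage technology fits your use case.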

Need to insert links to relevant API parts in text and/or as “See also” section.

I want to change the camera resolution in Unity application VisageTrackerDemo. Is this supported and how can I do this?

Depending on the platform, this is already possible out of the box.

On Windows and Android, the camera resolution can be changed via Tracker object properties defaultCameraWidth and defaultCameraHeight within the Unity Editor. When the default value -1 is used, the resolution is set to 800 x 600 in the native VisageTrackerUnityPlugin.

On iOS, it is not possible to change the resolution out of the box in the demo application. The camera resolution is hard-coded to 480 x 600 within the native VisageTrackerUnityPlugin.

VisageTrackerUnityPlugin is provided with full source code within the package distribution.

I am getting an EntryPointNotFoundException when attempting to run a Unity application in the Editor. Why does my Unity application not work?

Make sure that you have followed all the steps from the documentation Building and running Unity application.

Need to insert links to relevant API parts in text and/or as “See also” section.

Verify that the build target and the visage|SDK platform match. For example, running visage|SDK for Windows inside the Unity Editor or as a Standalone application will work, since both run on the same platform. Attempting to run visage|SDK for iOS inside the Unity Editor on macOS will produce an error because the iOS architecture does not match the macOS architecture.
