
visage|SDK never automatically stores any user information locally.

In summary, you as the application developer have full control and responsibility for any storage or transfer of user data that you may implement in your application.


Our sample projects show how to access local cameras; we currently have no ready-to-go code for accessing IP cameras. There are various ways to grab images from IP cameras, for example by using libVLC.
Usually, IP cameras provide a URL from which you can access the raw camera feed. If you can open the URL and see the feed in VideoLAN VLC Media Player, the same URL can be used in libVLC to access the feed programmatically.
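As an illustrative sketch, the same stream URL can also be opened with OpenCV's VideoCapture, which supports RTSP streams. Everything camera-specific here is an assumption: the `build_rtsp_url` helper, the default port and stream path, and any credentials are hypothetical and vary per camera model, so consult the camera's manual for the real URL.

```python
def build_rtsp_url(host, user=None, password=None, port=554, path="stream1"):
    """Assemble a typical RTSP URL. The default port and path are only
    common conventions; the real values depend on the camera model."""
    auth = f"{user}:{password}@" if user else ""
    return f"rtsp://{auth}{host}:{port}/{path}"

def grab_frames(url, max_frames=100):
    """Open the stream with OpenCV and yield decoded BGR frames."""
    import cv2  # optional dependency: pip install opencv-python
    cap = cv2.VideoCapture(url)
    if not cap.isOpened():
        raise RuntimeError(f"could not open stream: {url}")
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            yield frame  # each decoded frame can then be fed to the tracker
    finally:
        cap.release()
```

Each yielded frame is a plain image buffer, so it can be passed to the tracker the same way the local-camera samples pass their captured frames.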


The mentioned camera parameters (1920 × 1080, 30 fps, mono) are appropriate for our software and for most use cases.



In general, visage|SDK works with a very wide range of camera resolutions and can sustain tracking on faces as small as 30×30 pixels. For further details on inputs and other features specific to face tracking, please see

What should be the camera Field of View (FoV)?

Regarding the FoV, the selection should primarily depend on the use case (the planned position of the camera and the captured scene). Our algorithm is robust enough to handle some optical distortion that may result from lenses with a large FoV; however, extreme distortion (e.g. a fish-eye lens) will negatively affect the algorithm's performance.


There is no ready-made application to do this, but it can be achieved by modifying the existing sample projects. Such modification should be simple to do for any software developer, based on the documentation delivered as part of visage|SDK. We provide some instructions here.

To get you started, each platform-specific visage|SDK package contains sample projects with full source code that can be modified for this purpose. In the documentation, click on Samples to find the documentation for the specific sample projects.

On Windows, there are Visual Studio 2015 samples: ShowCaseDemo, written in C#, and FaceTracker2, written in C++.

On macOS, there is an Xcode sample: VisageTrackerDemo, written in Objective-C++.

Processing and saving information to a file can be implemented in the parts of the sample projects where tracking from a video file is performed:


Please search the source code of each sample for the exact location.

Important note: the sample projects have "video_file_sync" (or similarly named) functionality enabled, which skips video frames if tracking is slower than real time. This functionality should be disabled for full video processing, i.e. processing of all video frames.


The tracking distance depends on the camera resolution. For a face to be detected and tracked, it should be at least 30 pixels wide in the image. For example, with a 1920×1080 webcam (Logitech C920), the tracker can be configured to detect and track faces up to ~7.25 meters from the camera (with a performance tradeoff).
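The figure above can be sanity-checked with a simple pinhole-camera calculation. The 70° horizontal FoV and 15 cm face width used below are assumed round numbers, not official specs, so the result is an approximation rather than the exact ~7.25 m quoted above.

```python
import math

def max_tracking_distance(h_resolution, h_fov_deg,
                          face_width_m=0.15, min_face_px=30):
    """Distance at which a face of the given physical width spans
    min_face_px pixels, using an ideal pinhole-camera model (no
    lens distortion)."""
    # The scene width visible at distance d is 2 * d * tan(fov / 2),
    # so the face occupies h_resolution * face_width_m / scene_width
    # pixels. Solving for the d where that equals min_face_px:
    half_fov = math.radians(h_fov_deg) / 2
    return (h_resolution * face_width_m) / (min_face_px * 2 * math.tan(half_fov))

print(round(max_tracking_distance(1920, 70), 2))  # roughly 6.9 m
```

Varying the assumed FoV or face width by a few percent moves the result between roughly 6.5 and 7.5 m, consistent with the quoted ~7.25 m.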