BriefCam goes straight to the important stuff.

No one has the time to review hours of video footage. We present all the relevant moments in a single view. From there, users can dig into the details.
BriefCam’s professional users have used this technology to find bombing suspects within days of an incident, out of thousands of hours of footage from diverse camera feeds. This same accuracy is now available to your home and business customers to make their own video footage more useful, usable and relevant.

All of these functions, and many more, make BriefCam the best engine for object detection around. In fact, this engine is at the core of object detection for law enforcement, public safety and private security organizations, the world over.

See How BriefCam Works

Features

For your customers, BriefCam enables:

Quick review

This is not fast forward: it’s all in one rapid video review.

Deep drill down

Search the day in minutes. Select events and access the original video in a single click.

Alert configuration

Sends notifications of unusual activity.


Advance scheduling

Proactive rapid video review to “know what you didn’t know before.”


Share clips

One-click export of video clips to email or social media.

Gain insights

Graphic visualization of data analysis (heat maps, pathways, bar graphs, etc.)

BriefCam’s Core Technology

Advanced computer vision capabilities

At the core of BriefCam is a highly refined engine developed by a team that’s headed by one of the world’s leading computer vision and machine learning experts.

As we process video, we recognize and extract objects, along with information about those objects, such as color, direction, dwell time, size, path, speed, and more.

This may sound easy, but extracting, isolating and differentiating between independent objects is difficult, especially when the scene contains small or distant objects, poor illumination, background distractions or high activity.

In order to accomplish this, our R&D team has solved hundreds of computer vision challenges. The result is a complete, integrated solution for our partners.

Technology refinement examples include:


Filtering out noise to know what’s important.

To limit the false detection of new objects in a frame, we’ve learned how to filter out the movements in the environment – such as branches, shadows, reflections, waves and clouds.


Handling complex lighting conditions on the fly.

Every scene is different. Over the years we’ve developed the capacity to handle thousands of scenes, including varied lighting and weather conditions (you’d be surprised how much snow and rain impact the accuracy of computer vision!).


Detecting subtle and camouflaged objects.

We can identify very small, subtle objects, down to the level of a small animal. And while detecting an object that appears against a background of the same color is a difficult task, over the years our detection has become very sensitive to subtle differences in color and texture.


Knowing what goes together to create a single incident.

If a person “disappears” from the frame for a period of several seconds, for example while passing behind a car or a tree, we detect that and treat it as a single incident. If the person exits and re-enters the frame boundaries, we treat those as separate incidents.
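As a rough illustration (not BriefCam’s actual algorithm), the gap-handling logic above can be sketched as follows: consecutive detections of the same object are merged into one incident when the unseen gap is short, and split into separate incidents when the gap is long enough that the object plausibly left the scene. The threshold value here is an assumption for the example.

```python
# Hypothetical sketch of occlusion handling: group per-object detection
# timestamps into incidents, merging short gaps (object hidden behind an
# obstacle) and splitting on long gaps (object left the scene).

MAX_OCCLUSION_GAP = 5.0  # seconds; an assumed threshold, not BriefCam's

def group_incidents(timestamps, max_gap=MAX_OCCLUSION_GAP):
    """Split a sorted list of detection timestamps into incidents."""
    incidents = []
    current = []
    for t in sorted(timestamps):
        if current and t - current[-1] > max_gap:
            incidents.append(current)   # long gap: object left the frame
            current = []
        current.append(t)               # short gap: same incident continues
    if current:
        incidents.append(current)
    return incidents

# A person seen at 0-3s, hidden behind a car until 6s, then gone until 30s:
print(group_incidents([0, 1, 2, 3, 6, 7, 30, 31]))
# -> [[0, 1, 2, 3, 6, 7], [30, 31]]  (occlusion bridged, re-entry separate)
```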

Additional computer vision & machine learning capabilities in development.

BriefCam’s labs are always working on new features. For example, we’re adding the ability to automatically classify and refine query results by object type, such as people, vehicles, packages, etc.

Users will be able to submit queries such as “show me trucks” or “show me people wearing red.”

Every new feature creates superb differentiated value, unique to each partner’s needs.


Extended capabilities through third party metadata.

We connect video time stamps to metadata from other systems to create tight custom integrations. For example, if you have an ID badge system in your small business application, we can pair that information to video. The same goes with audio feeds, motion sensors, smoke detectors and more. Not only do you have the alert from the third party feed, but the video of what happened at that exact moment, along with more efficient presentation and rich query formulation capabilities.
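As a simplified sketch of the pairing described above (the function name, tolerance, and data shapes are illustrative assumptions, not BriefCam’s API), a third-party event such as an ID-badge swipe can be matched to the nearest video frame timestamp:

```python
from bisect import bisect_left

# Hypothetical sketch: pair a third-party event (e.g. an ID-badge swipe)
# with the nearest video frame timestamp, so the alert links directly to
# footage of that exact moment.

def pair_event_to_frame(event_time, frame_times, tolerance=1.0):
    """Return the frame timestamp closest to event_time, or None if no
    frame falls within `tolerance` seconds. frame_times must be sorted."""
    i = bisect_left(frame_times, event_time)
    candidates = frame_times[max(0, i - 1):i + 1]
    best = min(candidates, key=lambda t: abs(t - event_time), default=None)
    if best is not None and abs(best - event_time) <= tolerance:
        return best
    return None

frames = [0.0, 0.5, 1.0, 1.5, 2.0]          # sorted frame timestamps
print(pair_event_to_frame(1.3, frames))      # -> 1.5
print(pair_event_to_frame(9.0, frames))      # -> None (no nearby footage)
```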


How BriefCam Works

Upload Video

Video footage from any static camera is sent to BriefCam in a secure cloud environment. We can use our cloud, or attach to yours.


Extract Objects

BriefCam separates dynamic, moving objects from the static background. Through a combination of complex algorithms, we determine the boundaries of the relevant distinct objects on the screen.
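To make the idea concrete, here is a deliberately simplified stand-in for this step: basic frame differencing against a static background image. BriefCam’s actual algorithms are far more sophisticated; this toy version only shows the core concept of flagging pixels that differ from the background.

```python
# Minimal illustration of separating moving objects from a static
# background via frame differencing (a toy stand-in, not BriefCam's
# actual algorithm). Frames are 2-D lists of grey levels (0-255).

def moving_pixels(background, frame, threshold=30):
    """Return (row, col) positions where the frame differs from the
    static background by more than `threshold` grey levels."""
    return [
        (r, c)
        for r, row in enumerate(frame)
        for c, value in enumerate(row)
        if abs(value - background[r][c]) > threshold
    ]

background = [[10, 10, 10],
              [10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 10, 10],
              [10, 200, 10],      # a bright moving object at (1, 1)
              [10, 10, 10]]
print(moving_pixels(background, frame))  # -> [(1, 1)]
```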


Add Metadata

These separated objects are placed into a database, along with metadata such as color, size, date, time and duration. Object recognition adds automatic tagging, which is used to generate reports and answer detailed queries.


Add Features

Add features, such as object recognition processing, to enable semantic queries. Imagine being able to filter objects in any videos by descriptors such as “Show me all the blue cars passing by” — and go directly to that point in the video.
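A query like the one above can be sketched as a simple filter over the metadata records built in the previous step. The record fields and `query` helper here are illustrative assumptions, not BriefCam’s schema:

```python
# Hypothetical sketch of a semantic query over the object-metadata
# database: "show me all the blue cars" becomes a filter on tags, and
# each hit carries the timestamp needed to jump into the original video.

objects = [
    {"type": "car",    "color": "blue", "time": "09:14:02"},
    {"type": "person", "color": "red",  "time": "09:15:40"},
    {"type": "car",    "color": "grey", "time": "10:02:11"},
    {"type": "car",    "color": "blue", "time": "11:47:55"},
]

def query(records, **criteria):
    """Return records matching every key=value criterion."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

for hit in query(objects, type="car", color="blue"):
    print(hit["time"])   # jump points into the original video
# -> 09:14:02 and 11:47:55
```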


Present a Single View

This extraction allows us to present, in a single frame, all the objects that passed through the scene during the day.
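The scheduling idea behind the single view can be sketched as re-timing incidents so that events hours apart play almost simultaneously. Real video synopsis also avoids visual collisions between objects; this toy version (with an assumed fixed stagger) only shows the time-compression concept:

```python
# Illustrative sketch of the "single view" idea: incidents scattered
# across a whole day are re-timed to start near each other, producing a
# short synopsis. A fixed stagger stands in for real collision handling.

def synopsis_schedule(incidents, stagger=2.0):
    """Map each (start, end) incident to a new start time in the synopsis."""
    schedule = {}
    t = 0.0
    for start, end in sorted(incidents):
        schedule[(start, end)] = t
        t += stagger  # next incident begins shortly after, overlapping
    return schedule

# Three incidents spread over hours compress into a few seconds:
day = [(0, 12), (3600, 3620), (7200, 7215)]
print(synopsis_schedule(day))
# -> {(0, 12): 0.0, (3600, 3620): 2.0, (7200, 7215): 4.0}
```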

Architecture

Video Library

Find out more about BriefCam’s consumer market solutions

Contact us to add a customized version of BriefCam to your video offering

Looking for BriefCam’s security solutions?