BriefCam goes straight to the important stuff.
No more looking through many hours of footage. We present all the relevant
moments, in a single view. From there, users can dig into the details.
BriefCam’s professional users have used this technology to find bombing suspects within days of an incident, out of thousands of hours of footage from diverse camera feeds. This same accuracy is now available to your home and business customers to make their own video footage more useful, usable and relevant.
All of these functions, and many more, make BriefCam a leading engine for object detection. In fact, this engine is at the core of object detection for law enforcement, public safety and private security organizations the world over.
Proactive rapid video review to “know what you didn’t know before”
One-click export of video clips to email or social media
Graphic visualization of data analysis (heat maps, pathways, bar graphs, etc.)
Advanced computer vision capabilities
At the core of BriefCam is a highly refined engine developed by a team that’s headed by one of the world’s leading computer vision and machine learning experts.
As we process video, we recognize and extract objects, along with information about those objects, such as color, direction, dwell time, size, path, speed, and more.
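As a simple illustration of the kind of per-object record this extraction produces, here is a sketch in Python. The class and field names are hypothetical, chosen for illustration, and are not BriefCam's actual schema:

```python
from dataclasses import dataclass

# Hypothetical record for one extracted object; field names are
# illustrative, not BriefCam's actual schema.
@dataclass
class DetectedObject:
    object_id: int
    color: str           # dominant color label
    direction: str       # e.g. "north", "south-east"
    dwell_time_s: float  # seconds the object stayed in the scene
    size_px: int         # bounding-box area in pixels
    speed: float         # apparent speed, in pixels per second

# Example: a blue car that crossed the frame heading east.
car = DetectedObject(object_id=1, color="blue", direction="east",
                     dwell_time_s=4.2, size_px=5800, speed=120.0)
print(car.color, car.dwell_time_s)  # blue 4.2
```

Each record like this becomes a row the rest of the pipeline can filter and query.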
This may sound easy, but extracting, isolating and differentiating between independent objects is difficult, especially when the scene contains small or distant objects, poor illumination, background distractions or high activity.
In order to accomplish this, our R&D team has solved hundreds of computer vision challenges. The result is a complete, integrated solution for our partners.
Filtering out noise to know what’s important.
To limit false detection of new objects in a frame, we’ve learned how to filter out movement in the environment – such as branches, shadows, reflections, waves and clouds.
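One simple way to sketch this idea (an illustration only, not BriefCam's actual algorithm) is to require that a detection persist across several consecutive frames before it counts as a real object, so flickering motion such as a swaying branch is discarded:

```python
# Illustrative noise-rejection sketch: keep only detections that
# persist for several consecutive frames, filtering out flickering
# environmental motion (branches, shadows, reflections).

def persistent_detections(frames, min_frames=3):
    """frames: list of sets of detection IDs seen in each frame."""
    seen_streak = {}
    confirmed = set()
    for ids in frames:
        for det in ids:
            seen_streak[det] = seen_streak.get(det, 0) + 1
            if seen_streak[det] >= min_frames:
                confirmed.add(det)
        # reset the streak of any detection that vanished this frame
        for det in list(seen_streak):
            if det not in ids:
                seen_streak[det] = 0
    return confirmed

# "person" appears in every frame; "branch" flickers in and out.
frames = [{"person", "branch"}, {"person"},
          {"person", "branch"}, {"person"}]
print(persistent_detections(frames))  # {'person'}
```

A production system would combine many such cues, but the persistence test captures the basic intuition: real objects are stable, environmental noise is not.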
Handling complex lighting conditions on the fly.
Every scene is different. Over the years, we’ve developed the capacity to handle thousands of scenes, with varying lighting and weather conditions (you’d be surprised how much snow and rain impact the accuracy of computer vision!).
Detecting subtle and camouflaged objects.
We can identify very small, subtle objects, down to the level of a small creature. And while detecting an object that appears against a background of the same color is a difficult task, over the years our detection has become very sensitive to subtle differences in color and texture.
Knowing what goes together to create a single incident.
If a person “disappears” from the frame for several seconds, as they pass behind a car or a tree, we detect that and treat it as one incident. If the person goes out of and back into the frame boundaries, we treat those as separate object incidents.
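The stitching rule above can be sketched as a small gap-merging function. The five-second threshold and the data shapes here are assumptions for illustration, not BriefCam's actual parameters:

```python
# Illustrative sketch of stitching re-appearances into one incident:
# if the same object reappears within a short gap (e.g. it passed
# behind a car), the two appearances merge; a longer absence starts
# a new incident.

def group_incidents(appearances, max_gap_s=5.0):
    """appearances: sorted list of (start_s, end_s) for one object."""
    incidents = []
    for start, end in appearances:
        if incidents and start - incidents[-1][1] <= max_gap_s:
            incidents[-1] = (incidents[-1][0], end)  # stitch across gap
        else:
            incidents.append((start, end))
    return incidents

# Visible 0-10s, hidden behind a tree 10-13s, visible again 13-20s,
# then gone for a minute before returning: two incidents total.
print(group_incidents([(0, 10), (13, 20), (80, 90)]))
# [(0, 20), (80, 90)]
```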
Extended capabilities through third party metadata.
We connect video time stamps to metadata from other systems to create tight custom integrations. For example, if you have an ID badge system in your small business application, we can pair that information to video. The same goes with audio feeds, motion sensors, smoke detectors and more. Not only do you have the alert from the third party feed, but the video of what happened at that exact moment, along with more efficient presentation and rich query formulation capabilities.
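Pairing a third-party event to video by timestamp can be sketched as a lookup from the event's time into the recorded clips. The fixed clip length and names here are illustrative assumptions:

```python
# Sketch of pairing a third-party event (e.g. an ID-badge swipe)
# with video by timestamp. Structures are illustrative.
from bisect import bisect_right

def clip_for_event(event_ts, clip_starts, clip_len_s=60):
    """Return the start time of the clip covering event_ts,
    assuming fixed-length clips; None if no clip covers it."""
    i = bisect_right(clip_starts, event_ts) - 1
    if i >= 0 and event_ts < clip_starts[i] + clip_len_s:
        return clip_starts[i]
    return None

clip_starts = [0, 60, 120, 180]      # each clip covers 60 seconds
badge_swipe_ts = 135                 # swipe at t = 135 s
print(clip_for_event(badge_swipe_ts, clip_starts))  # 120
```

The same timestamp join works for audio feeds, motion sensors or smoke detectors: any event with a time can be mapped to the footage of that exact moment.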
How BriefCam Works
Video footage from any static camera is sent to BriefCam in a secure cloud environment. We can use our cloud, or attach to yours.
BriefCam separates the dynamic, moving objects from static background. Through a combination of complex algorithms, we determine the boundary of relevant distinct objects on the screen.
These separated objects are placed into a database, along with metadata such as color, size, date, time and duration. Object recognition adds automatic tagging, used to generate reports and answer detailed queries.
Add features, such as object recognition processing, to enable semantic queries. Imagine being able to filter objects in any video by descriptors such as “Show me all the blue cars passing by” and go directly to that point in the video.
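A query like that could be sketched as a simple filter over the object database. The records and descriptor names here are hypothetical, for illustration only:

```python
# Sketch of a semantic filter like "show me all the blue cars".
# Object records and descriptor names are illustrative assumptions.

objects = [
    {"id": 1, "type": "car",    "color": "blue", "time_s": 12.0},
    {"id": 2, "type": "person", "color": "red",  "time_s": 30.5},
    {"id": 3, "type": "car",    "color": "blue", "time_s": 95.2},
    {"id": 4, "type": "car",    "color": "gray", "time_s": 101.0},
]

def query(objects, **criteria):
    """Return objects matching every descriptor, e.g. type='car'."""
    return [o for o in objects
            if all(o.get(k) == v for k, v in criteria.items())]

blue_cars = query(objects, type="car", color="blue")
print([o["time_s"] for o in blue_cars])  # [12.0, 95.2]
```

Each matching record carries its timestamp, which is what lets the user jump straight to that point in the video.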
Presenting a single view
This extraction allows us to present, in a single frame, all the objects that passed through the scene over the course of the day.
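The synopsis idea can be sketched as re-timing tracks so that objects originally hours apart play back side by side. The round-robin offset scheme below is a deliberate simplification, not BriefCam's actual scheduling:

```python
# Sketch of the video-synopsis idea: objects seen hours apart are
# re-timed so their tracks play concurrently in one short summary.
# The round-robin offsets are a simplification for illustration.

def synopsis_schedule(tracks, lanes=3, stagger_s=2.0):
    """tracks: list of (object_id, original_start_s).
    Returns (object_id, synopsis_start_s), packing objects that
    originally appeared far apart into overlapping playback slots."""
    return [(obj_id, (i % lanes) * stagger_s)
            for i, (obj_id, _) in enumerate(tracks)]

# Four objects spread over a whole day start within seconds of each
# other in the synopsis.
tracks = [("car_1", 300), ("person_2", 7200),
          ("bike_3", 20000), ("car_4", 50000)]
print(synopsis_schedule(tracks))
# [('car_1', 0.0), ('person_2', 2.0), ('bike_3', 4.0), ('car_4', 0.0)]
```

A real scheduler would also avoid spatial collisions between tracks, but the core move is the same: decouple each object's playback time from its original time.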