BriefCam helps you go straight to what matters.
Our best-of-breed Video Synopsis® technology helps everyone, from law enforcement, government, and public security organizations to private security and corporate entities, extract value and actionable information from video and secure their environments.
Video Synopsis®, for rapid video review, search, and analysis, is the simultaneous presentation of events that occurred at different times. BriefCam Syndex® offers a powerful set of video review tools for locating events of interest, so that users can reach targets more quickly than ever before.
Advanced computer vision capabilities
At the core of BriefCam is a highly refined engine developed by a team that’s headed by one of the world’s leading computer vision and machine learning experts.
As we process video, we recognize and extract objects, along with information about those objects, such as color, direction, dwell time, size, path, speed, and more.
This may sound easy, but extracting, isolating, and differentiating between independent objects is difficult, especially when the scene contains small or distant objects, poor illumination, background distractions, or high activity.
In order to accomplish this, our R&D team has solved hundreds of computer vision challenges. The result is a complete, integrated solution for our partners.
Filtering out noise to know what’s important.
To limit the false detection of new objects in a frame, we’ve learned how to filter out the movements in the environment – such as branches, shadows, reflections, waves and clouds.
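BriefCam does not disclose its algorithms, but the general idea of suppressing environmental motion can be sketched with a simple running-average background model: each pixel's background estimate adapts slowly over time, so small periodic movements (a swaying branch, rippling water) stay below the foreground threshold while real objects stand out. All names and thresholds below are illustrative assumptions, not BriefCam's implementation.

```python
# Illustrative sketch only: a per-pixel running-average background model.
# Small fluctuations (e.g., a swaying branch) adapt into the background;
# only large, sustained deviations are flagged as foreground.

def make_subtractor(alpha=0.05, threshold=30):
    """alpha: background adaptation rate; threshold: minimum deviation
    (in intensity levels) for a pixel to count as foreground."""
    background = {}  # pixel index -> running-average intensity

    def apply(frame):  # frame: list of grayscale intensities (0-255)
        foreground = []
        for i, value in enumerate(frame):
            bg = background.get(i, value)  # initialize from the first frame
            if abs(value - bg) > threshold:
                foreground.append(i)       # large deviation: a real object
            background[i] = (1 - alpha) * bg + alpha * value  # slow adaptation
        return foreground

    return apply

subtract = make_subtractor()
subtract([100, 100, 100, 100])           # first frame initializes the background
print(subtract([104, 97, 102, 100]))     # small flicker (leaves, ripples) -> []
print(subtract([100, 100, 220, 215]))    # object entering -> [2, 3]
```

A production system works on 2-D images with far more robust statistical models per pixel, but the filtering principle is the same: motion only counts when it deviates strongly and persistently from the learned background.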
Handling complex lighting conditions on the fly.
Every scene is different. Over the years, we've developed the capacity to handle thousands of scene variations, including lighting and weather conditions (you'd be surprised how much snow and rain impact the accuracy of computer vision!).
Detecting subtle and camouflaged objects.
We can identify very small, subtle objects, down to the level of a small creature. And while detecting an object that appears against a background of the same color is a difficult task, over the years our detection has become very sensitive to subtle differences in color and texture.
Knowing what goes together to create a single incident.
If a person “disappears” from the frame for several seconds, for example while passing behind a car or a tree, we detect that and treat it as a single incident. If the person leaves and re-enters the frame boundaries, we treat those appearances as separate object incidents.
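The distinction above can be sketched as a grouping rule over timestamped detections: a short gap while the object is still inside the frame (an occlusion) is bridged into one incident, while a gap that begins at the frame boundary ends the incident. The gap threshold, record shapes, and frame geometry below are assumptions for illustration, not BriefCam's actual logic.

```python
# Illustrative sketch (not BriefCam's actual tracker): group timestamped
# detections of one object into incidents. A short mid-frame gap
# (occlusion behind a car or tree) is bridged; a gap that starts at the
# frame boundary ends the incident.

MAX_OCCLUSION_GAP = 5.0  # seconds; assumed threshold for bridging occlusions

def group_incidents(detections, frame_width=100):
    """detections: time-ordered list of (timestamp, x_position) for one object."""
    incidents = []
    current = []
    for t, x in detections:
        if current:
            prev_t, prev_x = current[-1]
            at_boundary = prev_x <= 0 or prev_x >= frame_width
            gap = t - prev_t
            # Start a new incident if the object left via the frame edge,
            # or vanished for longer than a plausible occlusion.
            if at_boundary or gap > MAX_OCCLUSION_GAP:
                incidents.append(current)
                current = []
        current.append((t, x))
    if current:
        incidents.append(current)
    return incidents

# Occlusion: a 3-second gap mid-frame is bridged into one incident.
print(len(group_incidents([(0, 40), (1, 45), (4, 55), (5, 60)])))   # 1
# Exit and re-entry: the object reaches the frame edge, then reappears.
print(len(group_incidents([(0, 40), (1, 100), (3, 80), (4, 70)])))  # 2
```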
Additional computer vision & machine learning capabilities in development.
BriefCam’s labs are always working on new features. For example, we’re adding the ability to automatically classify and refine query results by object type, such as people, vehicles, and packages.
Users will be able to submit queries such as “show me trucks” or “show me people wearing red.”
Every new feature creates superb differentiated value, unique to each partner’s needs.
Extended capabilities through third party metadata.
We connect video time stamps to metadata from other systems to create tight custom integrations. For example, if you have an ID badge system in your environment, we can pair that information with video. The same goes for audio feeds, motion sensors, smoke detectors, and more. Not only do you have the alert from the third-party feed, but also the video of what happened at that exact moment, along with more efficient presentation and rich query formulation capabilities.
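At its core, this kind of integration is a timestamp join: each third-party event is matched to the video segment whose time range contains it. The function name, record shapes, and clip identifiers below are assumptions made for the example, not part of any BriefCam API.

```python
# Illustrative sketch: pairing third-party events (e.g., an ID-badge
# reader) with video by timestamp. Record shapes are assumed for the
# example only.

def pair_events_with_video(events, clips):
    """events: list of (timestamp, description).
    clips: list of (start, end, clip_id) covering the video timeline.
    Returns (description, clip_id or None) for each event."""
    paired = []
    for t, description in events:
        clip_id = next(
            (cid for start, end, cid in clips if start <= t < end), None
        )
        paired.append((description, clip_id))
    return paired

badge_events = [(905, "badge 4411 at door B"), (2000, "badge 7001 at gate")]
video_clips = [(0, 900, "cam2-clip1"), (900, 1800, "cam2-clip2")]
print(pair_events_with_video(badge_events, video_clips))
# [('badge 4411 at door B', 'cam2-clip2'), ('badge 7001 at gate', None)]
```

An event falling outside the recorded timeline pairs with `None`, which is exactly the case where an alert exists but no corresponding footage does.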
How BriefCam Works
Video footage from any static camera is sent to BriefCam in a secure cloud environment. We can use our cloud, or attach to yours.
BriefCam separates the dynamic, moving objects from the static background. Through a combination of complex algorithms, we determine the boundary of each relevant, distinct object on the screen.
These separated objects are placed into a database, along with metadata such as color, size, date, time, and duration. Object recognition processing can be added for automatic tagging, enabling reports, detailed queries, and semantic queries. Imagine being able to filter objects in any video by descriptors such as “Show me all the blue cars passing by” and go directly to that point in the video.
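Once extracted objects and their metadata are in a database, a query like “all the blue cars” reduces to an attribute filter over those records. The field names and values below are assumptions for illustration; BriefCam's actual schema is not public.

```python
# Illustrative sketch: semantic queries as attribute filters over the
# extracted-object database. Record fields are assumed for this example.

objects = [
    {"id": 1, "type": "car",    "color": "blue", "time": "09:14", "speed": 42},
    {"id": 2, "type": "car",    "color": "red",  "time": "09:31", "speed": 38},
    {"id": 3, "type": "person", "color": "blue", "time": "10:02", "speed": 5},
    {"id": 4, "type": "car",    "color": "blue", "time": "11:45", "speed": 55},
]

def query(records, **attrs):
    """Return records matching every given attribute, e.g. type='car'."""
    return [r for r in records if all(r.get(k) == v for k, v in attrs.items())]

# "Show me all the blue cars passing by" -> jump to their timestamps.
for match in query(objects, type="car", color="blue"):
    print(match["id"], match["time"])
```

Each match carries its original timestamp, which is what lets the user jump straight to that point in the source video.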
Presenting a single view
This extraction allows us to present, in a single frame, all the objects that passed through the scene over the course of the day.
| Capability | | | | |
|---|---|---|---|---|
| Number of cameras | NA | NA | 50-200 | 100-unlimited |
| Number of clients | 1 | 5+ | 2 | 2+ |
| Standalone (no integration needed) | x | x | | |
| Accepts video from a wide range of sources and formats | x | | | |
| Video synopsis embedded in leading VMS brands | x | x | | |
| Case management / permissions for teams | x | x | | |
| Accommodates centrally connected or offline clients | x | x | x | |
| Highly scalable (large teams, multiple cases, large environments, DB capacity, cameras, users, servers) | x | x | | |
- Control object attributes (e.g., size, direction, speed, color)
- Hours of video reviewed in minutes
- One-click event selection indexes back to the original video
- One-click export of Video Synopsis or original video
- Bookmark and annotate objects for team collaboration
- Areas of interest
- Areas of exclusion
- Easy to install and operate
- Discover previously unreported events
- Get better evidence faster
- Reduce manpower time and costs
- Export and share investigation information
- Object attribute control reduces review time
- Integrates the user's experience and intuition