By Gili Rom
VP Strategy & Alliances
BriefCam
Video surveillance has always played a crucial role in security and public safety, but the evolution of video content analytics technology has allowed organizations to realize even more value from their video investments. Video analytics make video searchable, actionable and quantifiable — empowering users to review, analyze and glean important insight from video content.
In recent years, a leading trend in the video analytics industry has been toward Deep Learning-enhanced performance, accuracy, and capability, which has significantly driven the adoption of intelligent video solutions both for security surveillance users and for operational stakeholders across a multitude of industries. As this evolution continues, it merges with another key digitalization trend: Edge Computing.
Enter Deep Learning-Enabled Video Analytics at the Edge
The emergence of AI-based edge analytics is a critical advancement that reduces the total cost of ownership of a video analytics system, enables operation in low-bandwidth environments, and shortens real-time alerting response times.
Computing ‘on the edge’ means that information processing happens at or near the data source, rather than relying on centralized processing. With video analytics, edge computing can mean video processing on the camera or a local appliance, and it’s increasingly attracting the attention of enterprises and public safety providers for several reasons.
It can reduce latency, accelerate video processing, and provide greater security and value. By facilitating more efficient real-time processing, deep learning-enabled edge analytics in turn may reduce centralized processing hardware, storage, and bandwidth requirements, and allow for video analysis at lower cost and complexity. This is invaluable for those in need of object detection alerts or facial recognition in real time.
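To make the contrast concrete, here is a minimal sketch of the edge model: inference runs at the data source, and only lightweight detection metadata leaves the device instead of raw pixels. All names, the placeholder detector, and the figures are hypothetical, not any vendor's actual API.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Detection:
    label: str
    confidence: float
    bbox: tuple  # (x, y, w, h) in pixels

def process_frame_on_edge(frame_bytes: bytes) -> list[Detection]:
    """Stand-in for an on-camera DNN inference pass.
    A real edge device would run a quantized detector here."""
    # Hypothetical result: one person detected in the frame.
    return [Detection("person", 0.91, (120, 80, 60, 160))]

def to_metadata_payload(detections: list[Detection]) -> bytes:
    """Serialize only the detections -- not the pixels -- for upload."""
    return json.dumps([asdict(d) for d in detections]).encode()

frame = b"\x00" * (1920 * 1080 * 3)  # one raw 1080p RGB frame (~6 MB)
payload = to_metadata_payload(process_frame_on_edge(frame))
print(len(frame), len(payload))  # the metadata is orders of magnitude smaller
```

The point of the sketch is the last line: what crosses the network is a few hundred bytes of structured metadata per frame rather than megabytes of imagery, which is where the latency, bandwidth, and cost advantages come from.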
Real-Time Processing
There are an estimated one billion video surveillance cameras deployed worldwide. These devices are generating massive amounts of footage — so much that it would be physically impossible to actively monitor all of it to effectively identify suspicious objects or scenarios.
Like other forms of on-camera analytics, such as video motion detection (VMD), AI-based edge analytics process video in real time. By detecting, tracking, and classifying the objects a device captures, on-camera analytics inherently enable singular, per-camera real-time alerts, so that organizations and public safety providers can quickly detect specific objects and motions and classify them to support analytic capabilities such as face or license plate recognition, line crossing detection, people counting, and more.
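Line crossing detection, one of the per-camera alert types mentioned above, reduces to simple geometry once a tracker supplies object positions: alert when a tracked centroid moves from one side of a virtual tripwire to the other. The sketch below illustrates the idea with hypothetical coordinates; it is not any specific product's implementation.

```python
def side_of_line(point, a, b):
    """Sign of the cross product: which side of the line a->b the point is on."""
    (px, py), (ax, ay), (bx, by) = point, a, b
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def crossed(prev, curr, a, b):
    """True if a tracked centroid moved from one side of the line to the other."""
    s1, s2 = side_of_line(prev, a, b), side_of_line(curr, a, b)
    return s1 * s2 < 0

# Virtual tripwire from (0, 5) to (10, 5); a track moves upward across it.
track = [(4, 2), (4, 4), (5, 6)]
alerts = [crossed(p, c, (0, 5), (10, 5)) for p, c in zip(track, track[1:])]
print(alerts)  # [False, True] -- the second step crosses the line
```

Because this check runs against a handful of coordinates per frame, it is cheap enough for an on-camera processor, which is why such rules are a natural fit for the edge.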
While server-based video analytics provide a more comprehensive solution, generating insights and trends from multiple cameras, AI-based edge analytics are a critical component of a comprehensive deployment model for video analytics. For some organizations and law enforcement agencies, a combined approach is ideal. Server-based video analytics improve post-event investigations and help derive actionable insights for data-driven safety and operational decision making holistically from multiple camera sources, while AI-based edge analytics enable and optimize real-time alerting efficiency.
Rethink Video Decoding
Collecting and processing video on centralized servers requires streaming volumes of video from the source to the server, which demands substantial network bandwidth and high-speed, highly available connections. Furthermore, in order to transmit the data to centralized processing servers or recording archives over an IP network, the video must be encoded and then decoded to be viewed or processed. This encode-decode cycle is extremely compute-intensive.
Collecting and processing video directly on AI edge computing devices allows organizations to conserve bandwidth and reduce the cost of maintaining on-premise infrastructure. Furthermore, once video footage is captured on AI-enabled edge devices and transferred as metadata to a centralized location, it has already been processed for real-time usage. Hence, video decoding for real-time processing is eliminated, saving on system resources and time.
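A rough back-of-the-envelope calculation shows the scale of the savings. The bitrates below are illustrative assumptions (a typical 1080p H.264 stream versus a sparse metadata feed), not measured figures:

```python
CAMERAS = 100
STREAM_KBPS = 4000        # assumed H.264 1080p video stream per camera
METADATA_KBPS = 10        # assumed detection-metadata feed per camera
SECONDS_PER_DAY = 86_400

def daily_gb(kbps, cameras):
    """Total data moved per day across all cameras, in gigabytes."""
    return kbps * 1000 / 8 * SECONDS_PER_DAY * cameras / 1e9

print(round(daily_gb(STREAM_KBPS, CAMERAS)))    # ~4320 GB of video per day
print(round(daily_gb(METADATA_KBPS, CAMERAS)))  # ~11 GB of metadata per day
```

Under these assumptions, a 100-camera site shifts from moving terabytes of video per day to moving a few gigabytes of metadata, and the server side never has to decode that video for real-time use.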
The Future of Video Analytics
As the new generation of AI-based edge analytics becomes an increasingly attainable option for organizations, it will further democratize accurate, high-performing real-time video analytics, making it more available, accessible, and usable.
As Deep Learning Processing Units (DLPUs) continue to improve and become available at a reduced cost, and as deep neural networks (DNNs) further optimize performance for those DLPUs, we shall see more advanced real-time AI-based video analysis applications delivered on the edge. Meanwhile, we can expect hybrid solutions that combine real-time processing on the edge and on servers, to fulfill the range of real-time, AI-based video analytic applications.
In the longer term, hybrid solutions will likely become the common model for delivering a broad range of video analytics applications: robust forensic applications for video investigations, as well as comprehensive video insights and trends, will be deployed on servers – whether on-premise, in a data center, or cloud-hosted – while real-time, AI-based analytics will continue to develop at the edge.