The last decade has seen huge advances in artificial intelligence, smart devices and video analytics. The next will see a dramatic increase in the devices built from them. In fact, demand will be so high that we need to start thinking about our capacity to deliver them.
One bottleneck is the networks over which we expect these devices to connect. As 5G rolls out, 4G coverage is still patchy outside urban areas, and the capacity of our networks to carry 5G traffic has been questioned. The rollout has also been somewhat muted by protesters' attacks on phone masts.
Data centres are also feeling the strain. As more companies, individuals and devices link to Cloud services, data centres have to increase capacity, but noise abatement and heat dissipation make expanding or finding new sites a challenge.
The irony is that only a few emerging technologies need an explosively growing network; demand seems to be driven by people rather than machines. Follow any link to a 4G or 5G website and you quickly discover the benefit of being able to download a two-hour movie in ten seconds. A strange boast, considering that almost everyone now streams movies rather than downloading them (and we can't help wondering why they need them on the move).
By comparison, a smart meter reports your gas and electricity usage about six times per day, taking about three seconds of airtime in total. Smart meters also use some data to maintain their network, but that only raises their daily usage to about a minute.
Only a few devices need to transmit more than a few kilobytes of information per hour, nothing comparable to a movie download. Visual feeds from cameras are heavier on bandwidth, but how many hours of CCTV footage of empty buildings do we really need to collect on central servers?
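A back-of-envelope calculation makes the gap concrete. The figures below are illustrative assumptions, not measurements: a sensor sending a one-kilobyte reading every ten minutes, compared with a roughly 4 GB two-hour film.

```python
# Illustrative comparison: a low-rate IoT sensor versus one movie download.
# All numbers here are assumptions chosen for the sketch, not real device specs.

sensor_bytes_per_day = 1_000 * 6 * 24   # 1 KB reading, six times an hour, 24 hours
movie_bytes = 4 * 1_000_000_000         # a ~4 GB two-hour film

days_equivalent = movie_bytes / sensor_bytes_per_day
print(round(days_equivalent))           # ≈ 27778 days, roughly 76 years
```

On these assumptions, one movie download carries as much data as the sensor would send in about 76 years.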
The IoT is an outstanding medium for data gathering and remote control; the Cloud is ideal for data storage and leasing advanced applications; but the most exciting frontier is the development of autonomous systems. When we can store sophisticated algorithms on a chip, smart devices are not only less dependent on human management but also less dependent on networks. Problems such as communication interruption, bandwidth overload, and response latency begin to disappear.
The obvious example is the self-driving car. Not only is it heavily dependent on advanced image recognition, it must perform that recognition at blistering speed. If it had to depend on a remote server for its analytics, it could never match the response times of a human driver. There are good reasons for giving a self-driving car a connection (traffic information, for example), but the visual analytics that enable it to drive have to be local.
Video feeds are also a heavy load on human observers. CCTV security systems will be more effective when the equipment itself can identify salient events. In fact, the raison d'être for driverless cars is to improve on the situational awareness and sluggish responses of tired human drivers.
Cloud (or other network) dependence is the weak link in many IoT deployments, impairing their speed and reliability. The alternative is to distribute the processing workload close to the edge of the network, near the device. This is often called "Edge computing".
Rapid situational awareness can often be achieved by incorporating AI or video recognition algorithms onto the device itself, or supplying them in a specialised processing unit in close proximity. This infrastructure can still work in symbiosis with distant resources and control systems, but the bulk of the processing is shifted as close as possible to where it is immediately needed.
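The pattern can be sketched in a few lines: every frame is analysed on (or next to) the camera, and only salient events cross the network. The detector below is a deliberately trivial stand-in for a real on-device model, and the function and event names are illustrative, not a real API.

```python
# Edge-first sketch: heavy per-frame work stays local; only events are uploaded.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Event:
    frame_id: int
    label: str

def detect_locally(frame_id: int, pixels: bytes) -> Event | None:
    """Stand-in for an on-device model: flag any frame containing activity."""
    # Hypothetical rule for the sketch: any non-zero byte counts as motion.
    if any(pixels):
        return Event(frame_id, "motion")
    return None

def process_stream(frames: list[bytes]) -> list[Event]:
    """Analyse every frame at the edge; return only events worth sending on."""
    uploads = []
    for i, frame in enumerate(frames):
        event = detect_locally(i, frame)
        if event is not None:
            uploads.append(event)   # only these cross the network
    return uploads

# Ten frames, two with activity: the central system sees 2 events, not 10 frames.
frames = [bytes(64)] * 10
frames[3] = b"\x01" + bytes(63)
frames[7] = b"\x02" + bytes(63)
print(len(process_stream(frames)))  # 2
```

The design point is the ratio: the network carries two small event records instead of ten full frames, and the central control system still sees everything that matters.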
In the next few years, real-time information response capabilities will find a multitude of new niches and transform existing ones. For example, video surveillance has been booming for years (in retailing, transport and security systems), but re-establishing those systems on edge architectures will transform their value by making the intelligence they collect actionable.
Knowing which bus ran you over might be useful at an inquest, but we would rather be warned that the bus is coming. Or consider the difference between scouring a police officer's bodycam footage to see who fired at them and a system that can recognise a gun and issue a warning that saves their life.
Ideal solutions will often be hybrid. Many systems can learn to recognise faces locally, for example at ATMs and robotic checkouts, yet they can still liaise with central repositories when needed.
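A minimal sketch of that hybrid pattern, under assumed names (`local_gallery`, `central_repository` and the hash keys are all hypothetical): the device answers from its own small gallery when it can, and only consults the central repository on a miss, caching the answer for next time.

```python
# Hybrid recognition sketch: local-first lookup with a central fallback.
from __future__ import annotations

local_gallery = {"a1f3": "returning customer"}                      # on-device
central_repository = {"a1f3": "returning customer",
                      "9c2e": "new enrolment"}                      # remote

def identify(face_hash: str) -> tuple[str | None, str]:
    """Return (identity, source): local when possible, central on a miss."""
    if face_hash in local_gallery:
        return local_gallery[face_hash], "local"
    identity = central_repository.get(face_hash)                    # network call
    if identity is not None:
        local_gallery[face_hash] = identity                         # cache locally
        return identity, "central"
    return None, "none"

print(identify("a1f3"))   # answered locally, no network round trip
print(identify("9c2e"))   # first sighting: central lookup, then cached
print(identify("9c2e"))   # now answered locally
```

Most lookups never leave the device, yet nothing is lost: the central repository remains the authority for faces the device has not seen before.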
Fully autonomous robots are no longer far-fetched, but in the meantime let Net 4 show you how to future-proof your video processing systems.