The current video streaming wars are well documented. As US tech giants join the fray to compete for viewers, streaming companies must now seek out competitive advantages in an increasingly crowded market.
What is less well known is the hidden technological arms race underpinning the headline-grabbing streaming sector, a race that could hold a key advantage.
With consumers increasingly paying for higher-quality video content, streaming companies face the problem of balancing speed against quality, while also racing to develop more efficient video compression software to gain an edge over their rivals.
Video compression is increasingly a hot-button issue for the streaming giants. While the average bandwidth of network connections (the rate of data transfer) is increasing over time, so too is the number of network users and their demand for content.
This is happening in an increasingly asymmetric way with demand growing more quickly than our internet speeds can support.
With the rapid rise of video-on-demand channels across the globe and the increase in live streaming over the web, internet connections may struggle to cope.
Only a year ago, Netflix and YouTube alone accounted for 25% of all internet traffic, and this percentage is poised to continue rising rapidly as new challenger companies, such as Disney, Apple and Comcast, enter the space.
Even with a fast broadband connection, independent testing has shown that typical video streaming services struggle to make use of the extra speed. The issue lies in balancing quality with data usage.
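The quality/data trade-off can be made concrete with some back-of-the-envelope arithmetic. The bitrates below are rough, illustrative figures, not any provider's actual encoding ladder:

```python
# Approximate data consumed per hour of streaming at common bitrates.
# The bitrate figures are illustrative assumptions, not real service tiers.

BITRATES_MBPS = {
    "SD (480p)": 2.0,
    "HD (1080p)": 5.0,
    "4K (2160p)": 16.0,
}

def gb_per_hour(bitrate_mbps: float) -> float:
    """Convert a streaming bitrate in megabits/s to gigabytes per hour."""
    return bitrate_mbps / 8 * 3600 / 1000  # Mbit/s -> MB/s -> MB/hour -> GB/hour

for label, mbps in BITRATES_MBPS.items():
    print(f"{label}: ~{gb_per_hour(mbps):.1f} GB per hour")
```

At these assumed rates, an hour of 4K consumes roughly eight times the data of an hour of SD, which is why even modest percentage savings from better compression matter at scale.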
Beyond making video compression more efficient to keep pace with user demand, significant challenges remain in the development of better video encoders and decoders, which process enormous amounts of data.
This is where AI technologies must seek to improve the balance between data usage and high-quality video streaming. Moreover, the energy-intensive demands of streaming are also having a significant environmental impact on overburdened data centres.
The confluence of these challenges means that wireless bandwidth will remain one of the most important issues, and the delivery of high-quality video in live-streaming or video-on-demand scenarios will continue to clog data centres and wireless and IP connections.
With competitors developing rival software to meet this challenge, the Alliance for Open Media (AOMedia) is seeking to collaboratively develop royalty-free standards for video coding. ISO MPEG, for instance, has recently adopted a commitment to make some of its video encoding standards royalty free.
It remains to be seen, however, whether this approach will encourage wider sector development, or whether it will limit innovation to the small number of companies invested deeply enough to pay for the research and development, so that they can later use the technology without royalties.
Competing approaches in video delivery that go beyond current standards can be grouped into roughly three camps:
The first type of approach is device-based enhancement solutions. While there are certain promising advances in this domain, this category of solutions is limited by the complexity and power consumption potential of consumer electronics.
A second family of approaches consists of companies developing their own bespoke image and video encoders, typically based on deep neural networks. This requires bespoke transport mechanisms and decoders, presenting a risky proposition for mainstream video encoding services.
The third family of methods comprises optimisation of existing encoders by using perceptual metrics.
The challenges include: (i) the constraint of complying with the standard in use; (ii) many of the proposed solutions are limited to focus-of-attention models or shallow learning methods; (iii) such methods tend to require multiple encoding passes, thereby increasing complexity.
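Challenge (iii) can be sketched in a few lines. The stub functions below stand in for a real encoder and a real perceptual metric (such as VMAF); everything here, including the scoring formula and the bitrate ladder, is a hypothetical illustration of why finding the cheapest acceptable encode costs multiple full encoding passes:

```python
# Hypothetical sketch of perceptual optimisation over an existing encoder:
# walk a bitrate ladder upwards until a perceptual quality target is met.
# `encode` and `perceptual_score` are stubs standing in for a real encoder
# and a real metric; each loop iteration represents one full encoding pass.

def encode(clip: str, bitrate_kbps: int) -> str:
    return f"{clip}@{bitrate_kbps}kbps"          # stub: pretend to encode

def perceptual_score(encoded: str) -> float:
    kbps = int(encoded.rsplit("@", 1)[1][:-4])   # stub: score grows with bitrate
    return min(100.0, 40.0 + 15.0 * (kbps / 1000))

def lowest_sufficient_bitrate(clip, target=93.0,
                              ladder=(1000, 2000, 3500, 5000, 8000)):
    """Return the lowest ladder bitrate whose encode meets the quality target."""
    for kbps in ladder:
        if perceptual_score(encode(clip, kbps)) >= target:
            return kbps
    return ladder[-1]

print(lowest_sufficient_bitrate("clip.mp4"))  # prints 5000 with these stubs
```

In the worst case every rung of the ladder must be encoded and scored, which is exactly the complexity cost that standard-compliant perceptual methods incur.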
Because of these challenges, designs for a new encoder and/or new standard require substantial effort.
Approaches that compress video files but carry significant deployment costs and do not fit existing standards bring associated risks. On the other hand, approaches that deliver bandwidth savings on top of existing standards can provide immediate value, especially when those savings are shown to be significant.
This is where AI can present significant advantages for businesses and investors. For instance, iSize Technologies applies its processing before the pixel content hits any standard video encoder, in a framework known as deep video precoding, which provides higher quality video for less data usage.
AI used in this way has the potential to provide compression solutions to meet growing video demand, offering streaming companies significant advantages in the war for the attention of viewers.
It is not just the tech giants who could reap the rewards. Efficient video compression software can also deliver substantial savings in energy consumption and cost, reducing the huge environmental impact of data centres.
Given the enormity of the burden our viewing habits are placing on the Internet, the value of such technologies should not be underestimated.
Sergio Grce is CEO at iSize Technologies.
The Hidden Technological Arms Race That Could Break The Streaming Deadlock