Ian Hamilton, CTO, Signiant, looks at the increasing importance of live streaming events at remote locations

Published: Aug 24, 2017
Live streaming of video from events taking place at remote locations, such as sporting events, has become an increasingly important piece of today's media and entertainment world.
Consumers want and expect to be able to see all the action, live, in high-quality video, wherever they are and on whatever device or platform they choose. Increasingly, watching live video streams delivered over the internet has become the normal experience for the motivated fan.
To meet this demand, broadcasters and other media rights holders have massively increased the amount of video content they produce on site at remote events, and have had to decide how to deliver it to the CDNs for live streaming to their audiences. Typically, there have been two models for the latter: either the video is processed on site for real-time delivery to the CDN (often at mezzanine quality), or it is brought back in full broadcast quality to a permanent facility, where it is similarly processed and delivered live to the CDN. This at-home production model has become particularly popular with some broadcasters, since it reduces the expense of the temporary infrastructure and staffing deployments needed each time they produce a remote event.
Each approach has its pros and cons, but both face one common challenge: how to transfer live content from the remote location to the CDN without unacceptable delays or loss of quality, either of which the CDN will inevitably pass on to the consumer. The challenge is primarily one of cost versus quality (in the broadest sense, meaning the entire experience of the end consumer watching the live stream of the event). The cost here is the cost of delivery, which, when transporting high-quality live video over long distances, can be considerable. Since at a certain point quality becomes non-negotiable, particularly when a lot of money and a company's reputation are at stake, the default position is to spend the money on transport infrastructure to ensure that the desired-quality video is reliably delivered to the CDN in a timely fashion.
Cost enters the picture because achieving the kind of QoS content providers are looking for has historically required purpose-engineered networks and expensive, dedicated satellite or fibre connectivity. These connectivity costs scale linearly with the volume and bitrates of the video delivered over them, both of which grow ever larger as the demand for content increases and as video resolutions climb to 4K and beyond. Such costs may be readily absorbed by the big broadcasters covering major events with mass appeal, but they can quickly become a big hit on the budgets of smaller events, or of smaller companies working with far fewer resources.
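The linear scaling described above can be made concrete with a back-of-the-envelope sketch. The bitrates and per-Mbps price below are illustrative assumptions for the sake of the arithmetic, not figures from any carrier or from Signiant:

```python
# Sketch of how contribution bandwidth (and hence dedicated-connectivity
# cost) scales linearly with feed count and per-feed bitrate.
# All figures are hypothetical, chosen only to illustrate the scaling.

TYPICAL_CONTRIBUTION_BITRATE_MBPS = {
    "HD (1080p)": 50,   # assumed mezzanine-quality contribution feed
    "UHD (4K)": 200,    # assumed: roughly 4x the HD contribution rate
}

COST_PER_MBPS_PER_MONTH = 30.0  # hypothetical dedicated-link price, USD

def monthly_link_cost(resolution: str, num_feeds: int) -> float:
    """Cost grows linearly in both the number of feeds and their bitrate."""
    bitrate = TYPICAL_CONTRIBUTION_BITRATE_MBPS[resolution]
    return bitrate * num_feeds * COST_PER_MBPS_PER_MONTH

# Moving a four-camera event from HD to 4K quadruples the connectivity bill:
print(monthly_link_cost("HD (1080p)", num_feeds=4))  # -> 6000.0
print(monthly_link_cost("UHD (4K)", num_feeds=4))    # -> 24000.0
```

Whatever the actual unit price, the shape of the curve is the point: doubling feeds or doubling bitrate doubles the connectivity cost, with no economy of scale.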
An obvious alternative approach would be to take advantage of the economies of scale, the commodity-based infrastructure, and the open, standards-based nature of the public internet and cloud technologies, and simply migrate the contribution side of the supply chain to them. This is, of course, how the OTT distribution side (the live streaming to the consumer) has been done from the outset, with tens of millions of videos streamed online every day; well-developed technologies exist for optimising the streaming of lower-bitrate content, already cached on the CDN, to consumers. But the real-time delivery of high-quality video streams over the public internet on the contribution side, with its less forgiving QoS requirements, is a greater challenge and has remained a stubborn holdout.
The reason lies in limitations of the TCP layer built into the architecture of all TCP/IP networks, including the internet. These limitations, deriving from the way TCP works, are generally not a problem when sending relatively small data sets over the internet. But TCP's reliability and congestion-control mechanisms do not deal well with moving large data sets (such as streams of high-quality live video) over long distances, where high latency and packet loss can cause transfers to become slow, or fail entirely. Signiant, and others, have developed technologies to mitigate this problem for the asynchronous transfer of file-based video (or any other data type), whether to CDNs for VoD streaming, or for a multitude of other video production, contribution and distribution use cases. By eliminating packet loss and taking latency (primarily a function of distance) out of the equation, we have developed file transfer solutions that fully utilise the available bandwidth and achieve the kind of speeds and reliability required for moving large files efficiently across long distances over the internet (or any TCP/IP network). Such solutions are in use by most major media and entertainment companies, and many smaller ones, all across the world.
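The effect of latency and packet loss on TCP can be illustrated with the well-known Mathis approximation, which bounds a single TCP connection's throughput at roughly MSS / (RTT · √p). The inputs below (segment size, round-trip times, loss rate) are illustrative examples, not measurements:

```python
import math

def tcp_throughput_ceiling_mbps(mss_bytes: int, rtt_ms: float,
                                loss_rate: float) -> float:
    """Mathis et al. approximation: throughput <= MSS / (RTT * sqrt(p)).

    Returns the ceiling in megabits per second; only valid for loss_rate > 0.
    """
    rtt_s = rtt_ms / 1000.0
    bytes_per_second = mss_bytes / (rtt_s * math.sqrt(loss_rate))
    return bytes_per_second * 8 / 1_000_000

# Same link quality (0.01% loss), different distances: raising the RTT from
# a short 10 ms hop to a 150 ms long-haul path slashes the ceiling, no
# matter how much raw bandwidth the link actually has.
print(tcp_throughput_ceiling_mbps(1460, rtt_ms=10, loss_rate=0.0001))
print(tcp_throughput_ceiling_mbps(1460, rtt_ms=150, loss_rate=0.0001))
```

This is why acceleration technologies that sidestep TCP's per-connection behaviour can fill the pipe where a plain TCP transfer cannot: the model shows the ceiling falling in direct proportion to round-trip time.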
More recently we have migrated this technology to the cloud (a movement Signiant pioneered), delivering the same accelerated file transfer capabilities as cloud-native SaaS solutions capable of moving huge data sets into and out of cloud storage quickly, reliably and securely, without the need for expensive, dedicated connectivity.
At this year's NAB, and the most recent HPA Tech Retreats in LA and the UK, we demonstrated our next-generation transport architecture, which brings our acceleration technology to standard HTTP(S) transfers. Designed for a cloud-centric, standards-focused world, the recently patented scale-out architecture delivers