Expanding on the article in TVBEurope's August issue, about how BBC R&D used the Eurovision Song Contest as a testing ground for next generation audio, BBC R&D's Matt Firth explains some of the challenges, and where we might go next

By Kevin Emmott
Published: August 10, 2023
There's been a quiet revolution in television sound, and it's all about the objects.
Object-Based Audio (OBA) doesn't just open doors for people to experience audio content in new ways; it kicks them down and sounds amazing while doing so.
It gives content producers more freedom to generate a range of audio experiences for their customers, and earlier this year the Eurovision Song Contest in Liverpool marked a milestone in the way it can be produced and delivered.
OBA works by encoding audio objects with metadata describing what each object is and how it contributes to a mix. An audio object can be anything: an instrument, a commentator, or a crowd mix. This enables content providers to deliver personalised and immersive listening experiences to customers - also known as Next Generation Audio (NGA) - and a receiver in the consumer's equipment decodes the metadata to ensure the mix is rendered as the producer intended.
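As a loose illustration of the concept (the class and field names below are hypothetical, not taken from any standard), an audio object is simply a stream paired with descriptive metadata, and the final mix is only assembled at the receiver:

```python
from dataclasses import dataclass

# Hypothetical, simplified model of an audio object: the core OBA idea is
# audio plus metadata saying what the audio is and how it sits in the mix.
@dataclass
class AudioObject:
    name: str                  # e.g. "commentary", "crowd", "lead vocal"
    gain: float                # linear gain to apply at render time
    position: tuple            # (azimuth, elevation, distance) in the 3D scene
    interactive: bool = False  # may the listener adjust this object?

# Because the receiver renders the mix, a listener could lower the
# commentary without touching the crowd - the broadcaster sends objects,
# not a single pre-mixed signal.
scene = [
    AudioObject("commentary", gain=1.0, position=(0.0, 0.0, 1.0), interactive=True),
    AudioObject("crowd", gain=0.8, position=(0.0, 0.0, 1.0)),
]
```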
The Audio Definition Model (ADM) was published in 2014 by the EBU and became an ITU Recommendation in 2015 (ITU-R BS.2076). It is an open standard that all content producers can use to describe NGA content. As an open model, it can be interpreted by both Dolby's AC-4 and Fraunhofer's MPEG-H, and anyone else who wants a piece of the action as encoders evolve.
"File-based ADM is reasonably well adopted now," says Matt Firth, project R&D engineer at BBC R&D in Salford, UK, and part of the Eurovision S-ADM trials. "ADM is designed to be an agnostic common ground between NGA technologies, to allow programme makers to produce something in ADM and not worry about the format that the content needs to be in for a particular emission route, whether that is AC-4 or MPEG-H. If your content is in ADM, you can feed that into the encoder and it's the encoder's job to put it into whatever proprietary format it supports and deliver it."
If it is so well adopted, you might ask, why can't I already personalise what I hear at home and listen to just the football crowd if I want to?
Well, for live content we're not there yet. File-based ADM doesn't work in real-time workflows like live broadcast; those call for a frame-based version of the model.
Serial ADM, or S-ADM, is exactly that. Established in 2019 and standardised in ITU-R BS.2125, S-ADM provides metadata in time-delimited chunks and delivers them in sequence alongside the audio. In other words, it enables use of ADM metadata in real-time applications.
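Sketched in Python, the difference is essentially one metadata document for a whole programme versus a sequence of timed chunks; the frame length and data layout below are illustrative, not lifted from BS.2125:

```python
FRAME_MS = 40  # hypothetical frame length; real systems tie frames to the audio cadence

def sadm_frames(events):
    """Group (time_ms, metadata) events into successive fixed-length frames.

    Yields (frame_start_ms, [metadata, ...]) in order, so each metadata chunk
    can be serialised and delivered alongside the matching span of audio.
    """
    frame, start = [], 0
    for t_ms, meta in sorted(events, key=lambda e: e[0]):
        while t_ms >= start + FRAME_MS:
            yield start, frame                    # emit the completed frame
            frame, start = [], start + FRAME_MS
        frame.append(meta)
    yield start, frame                            # final, possibly partial frame

# e.g. changes at 0 ms and 35 ms land in the frame starting at 0; one at 90 ms
# lands in the frame starting at 80, with an empty frame at 40 in between
for start, chunk in sadm_frames([(0, "gain A"), (35, "pan B"), (90, "gain A")]):
    print(start, chunk)
```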
While companies such as Dolby have been experimenting with S-ADM, there are still gaps to fill and workflows to develop, and the Eurovision trial was a massive step forward. As the first-ever BBC trial using S-ADM, it allowed Firth and his BBC colleagues to build new workflows, identify gaps, and glue the broadcast infrastructure together.
"It required the development of software to fill those gaps in the chain," says Firth, "but as we knew exactly what feeds we were getting, as well as the ADM structure that the emission encoder required, we were able to quickly create the software to do that conversion."
[Figure: The topology of BBC R&D's S-ADM trial at Eurovision]

The BBC's listening room in Salford received 62 channels from Liverpool over MADI, including music, presenters, FX and a variety of audience microphone feeds. The production was handled by L-Acoustics' L-ISA Controller software, and the L-ISA Processor software was used for local monitoring.
The L-ISA Controller allows a sound producer to set position and gain parameters for each of the input audio channels from a straightforward user interface. L-ISA supports ADM-OSC, which is a method of conveying ADM metadata using the Open Sound Control protocol.
"It presents a hierarchical path to a control or a parameter and defines a value for it," explains Firth. "It's really simple, making it really easy to implement. In the Eurovision project, we used ADM-OSC to refer to a particular audio object, and then to a parameter for that audio object, such as 3D position or gain. It makes it a nice solution for production controllers like L-ISA, and as long as the receiving software is also speaking ADM-OSC, it is easy to interoperate."
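A minimal sender sketch (assuming the python-osc package; the host, port and object index here are arbitrary choices) shows how little is needed - a hierarchical address naming the object and parameter, plus a value:

```python
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

# The /adm/obj/... address patterns follow ADM-OSC conventions; the
# destination address and object index are assumptions for this sketch.
client = SimpleUDPClient("127.0.0.1", 9001)

obj = 1  # index of the audio object in the scene
client.send_message(f"/adm/obj/{obj}/azim", 30.0)  # azimuth, degrees
client.send_message(f"/adm/obj/{obj}/elev", 10.0)  # elevation, degrees
client.send_message(f"/adm/obj/{obj}/dist", 1.0)   # normalised distance
client.send_message(f"/adm/obj/{obj}/gain", 0.9)   # linear gain
```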
To test interoperability, Firth adapted BBC R&D's Live Binaural Production Tool, originally developed for the BBC Proms, to output ADM-OSC as an alternative production controller. He also built an alternative monitoring solution based around the EBU ADM Renderer (EAR), which also supports ADM-OSC; both controllers were able to function with either monitoring solution.
In addition, both the L-ISA Controller and the alternative production controller were able to connect to an S-ADM generator which BBC R&D built to convert the ADM-OSC to S-ADM. Using ADM-OSC to generate S-ADM in a live production broadcast chain was a world first for the team and another cause for celebration; it proved that any software which can generate ADM-OSC can be used in the production set-up.
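The shape of such a converter can be sketched as follows - an illustrative outline rather than BBC R&D's actual tool, again assuming the python-osc package. Incoming ADM-OSC messages update a cached scene state, which a frame-based process would then snapshot into successive S-ADM metadata blocks:

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

scene_state = {}  # {object_index: {parameter: latest_value}}

def on_adm_osc(address, value):
    # e.g. address == "/adm/obj/1/azim" -> object 1, parameter "azim"
    _, _, obj, param = address.strip("/").split("/")
    scene_state.setdefault(int(obj), {})[param] = value

dispatcher = Dispatcher()
dispatcher.map("/adm/obj/*/*", on_adm_osc)

# Each received message updates the cache; a separate clocked task would
# serialise scene_state into the next S-ADM frame for the emission encoder.
server = BlockingOSCUDPServer(("0.0.0.0", 9001), dispatcher)
server.serve_forever()
```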
"We also wanted to make sure that different implementations of ADM can talk to each other," adds Firth. "For example, our own EAR Production Suite software uses a very unconstrained version of ADM known as the production profile, which is very flexible in the number of audio channels you can use, similar to the ADM produced from the production