
Monday, April 14, 2025 - 12:44 pm
Not everything at NAB 2025 was about tariffs. Artificial intelligence (AI), both as a concept and as a reality, was also a major topic of conversation.
Telos Alliance used the show to debut the AI-driven features added to its Minnetonka Audio AudioTools Server file-based audio-automation platform. The new features arrive in ATS V7, a major software update for AudioTools Server that is scheduled for release later this year. The update is intended to take advantage of the personalization features of Next Generation Audio and will introduce the ability to measure dialog intelligibility, a top audio-related complaint.
Minnetonka Audio AudioTools Server
"Intelligibility is a huge issue going forward, especially with smaller screens," explains Marty Sacks, EVP, sales, marketing and strategy, Telos Alliance. "The other thing we are very excited about is helping people that are creating content across multiple languages to be able to figure out how to assign the right content to the right channel, part of a partnership that we have with the Fraunhofer Institute, using algorithms they've developed."
The ultimate goal, he says, is to be able to apply these kinds of processes to live production, such as sports.
Lawo's algorithmically driven Kick ball-tracking technology presaged much of what is now expected of AI, and Lawo CMO Andreas Hilmer suggests that AI's future in broadcasting might be more in the background than behind the mix console, where some personnel-challenged broadcasters have been hoping it can pick up the slack.
"I think the major impact first will be more on the infrastructure side," he says. "Managing infrastructure, optimizing the use of resources, how to manage your network, knowing about what potential failures could come up from experience over machine learning - these types of things, I think, are where we will have first impact [for AI]. There will be gadgets, no question, but will they really change the workflows? I don't think so at the moment."
Clear-Com VP, Product Management, Dave McKinnon is equally skeptical that AI might be the revolutionary game-changer it was touted to be even a year ago. "AI is definitely something we are considering. I spent 12 years at NBCUniversal on the production side, and I always hated gimmicky things, whether it was 3D television or even some of the 8K buzz. We are going to implement AI in a very thoughtful way that's not a gimmick. We have some things here that we're showing in terms of speech-to-text and other applications."
"But," he continues, "I've told my team I want to bring AI in, but we want to do it in a way that's constructive to our users and our use cases, not just so we can go stamp on the box that it has AI."
Riedel Smart Audio and Mixing Engine (SAME)
AI may be a bit of a victim of its own hype, but automated audio systems are definitely getting smarter. Riedel's Smart Audio and Mixing Engine (SAME) was an example at the show. Introduced last year, SAME isn't tech for tech's sake but rather a workflow enhancement. With a suite of 30-plus advanced audio-processing tools and mixers - ranging from automatic leveling and dynamic equalization to 5.1 upmixing, loudness meters, and signal analyzers - SAME caters to such applications as voiceover, automated mixing, audio monitoring, and in-line process insertion.
"It fills a need for more automation but not necessarily using AI," says Roger Heiniger, senior product manager, Riedel Communications. "SAME will open up a whole world of new workflows for the audio-production industry."
A demo illustrated how a single A1 can remotely supervise and monitor the work of multiple mixers. What's important, he says, is that the UI can also be adapted to the submixers' level of competence or to a level allowing the talent themselves to manage their own audio.
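To make the "automatic leveling" in that tool set concrete, here is a deliberately simplified sketch of the underlying idea - measure a clip's level and apply a gain that brings it to a target. It is a generic illustration, not Riedel's implementation: real tools such as SAME work to broadcast loudness standards and adapt dynamically rather than applying one static gain, and the -23 dB target, RMS measurement, and function names below are assumptions made for the example.

import numpy as np

def rms_dbfs(samples: np.ndarray) -> float:
    """RMS level of a float signal (-1..1 range) in dB relative to full scale."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20.0 * np.log10(max(rms, 1e-12))

def level_to_target(samples: np.ndarray, target_dbfs: float = -23.0) -> np.ndarray:
    """Apply a single static gain that brings the clip's RMS level to the target."""
    gain_db = target_dbfs - rms_dbfs(samples)
    return samples * (10.0 ** (gain_db / 20.0))

# Example: a quiet 1 kHz test tone raised to the assumed -23 dBFS target
t = np.linspace(0, 1.0, 48_000, endpoint=False)
quiet_tone = 0.01 * np.sin(2 * np.pi * 1_000 * t)
leveled = level_to_target(quiet_tone)
print(round(rms_dbfs(leveled), 1))  # approximately -23.0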
Data Makes the Difference
RTS NOMAD wireless intercom solution
AI has a bigger future than many may imagine, though. RTS Intercom Systems rolled out its new NOMAD Wireless Intercom and RVOC Hybrid Cloud Solution at NAB 2025. Musing about intercoms in the AI era, including AI-enabled language translation, Mike Keiffer, director, project management, RTS, notes that, typically, "you have a big sports event where you have commentators sitting everywhere, [speaking] in every language. Now you can have one [commentator] and speak 20 languages fully automatic with a delay of milliseconds."
Company research, he adds, now involves extensive data modeling to better understand how AI-enabled systems make connections. "We have to have something to predict the model and to route [signals]," he explains. "It's coming. It's probably two years out, but, with the cloud that we're doing already, we can start to do some of the data analytics now. The beauty of it is, once you get it into the cloud, you can start to use the data, see how the users are using it, and we can start to migrate into the feature sets that the customers need."
Audinate CMO Josh Rush notes that digital networks are producing and accumulating ever more data, creating pain points of their own. Audinate's Dante format may be able to help with that in the future, using AI. "Dante is sitting on top of a lot of data," he explains. "We have information on the devices, we have information on the network utilization, and on all the tools that we're showing here for control and management. We're just doing it from a reactive component today: if a device drops off the network, we can let you know about that."
But what we're working on in the labs, he continues, is how we get more predictive ab
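As a rough illustration of the "reactive" monitoring Rush describes - this is not Audinate's API or Dante's actual telemetry, and the device names and five-second timeout are invented for the example - a last-seen heartbeat check is one simple way to flag a device that has dropped off a network:

import time

HEARTBEAT_TIMEOUT_S = 5.0         # assumed threshold; a real system would tune this
last_seen: dict[str, float] = {}  # device name -> timestamp of its last heartbeat

def record_heartbeat(device: str) -> None:
    """Called whenever a device announces itself on the network."""
    last_seen[device] = time.monotonic()

def dropped_devices() -> list[str]:
    """Reactive check: list devices that have gone silent past the timeout."""
    now = time.monotonic()
    return [dev for dev, ts in last_seen.items() if now - ts > HEARTBEAT_TIMEOUT_S]

# Example with hypothetical device names
record_heartbeat("stagebox-01")
record_heartbeat("console-a")
print(dropped_devices())  # [] until a device misses heartbeats for five seconds

A predictive version - the direction Rush points to - would feed the same last-seen history into a model that flags devices trending toward failure before they actually disappear.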