OTTAWA-Trend alert: Artificial intelligence/machine learning (AI/ML) is becoming an integral part of the total TV production/playout process.

AI/ML is starting to provide tremendous value to broadcasters and content producers, said Amro Shihadah, co-founder and chief operating officer of IdenTV, a McLean, Va.-based real-time video analysis company. It does this by transforming big data from a cost center and an opaque set of structured/unstructured datasets into real-time, actionable analytics and tools for big data search and recall, creating a better user experience and generating revenue from new content distribution channels.
Broadcast consultant Gary Olson, who has just released the second edition of his book, "Planning and Designing the IP Broadcast Facility: A New Puzzle to Solve," says the technology is already showing up in elements of the production chain and is expected to expand its footprint.
"I see AI/ML appearing in editing, graphics and media management products in 2020," Olson said. "As the year progresses, it will be interesting to see which vendors will claim their products have AI or ML."
CONTENT DISCOVERY
Many major broadcasters and TV studios have vast libraries ripe for direct-to-consumer online sales. The challenge lies in determining which of these programs will appeal to modern consumers, and for what reasons, without using employees to watch all of them in real time.
Prime Focus Technologies' CLEAR Vision Cloud has a cloud-based AI engine that can do this work across a number of search variables, and in record time, according to the company.
"There could be one AI engine that looks at identifying faces in the video," said Muralidhar Sridhar, vice president of AI and machine learning for PFT. "Another one may look for signature sounds, let's say a person splashing through water, while a third searches for distinct objects. Best yet, what would take humans hours to achieve looking at a piece of content can be done by our AI in real time."
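To make the idea concrete, here is a minimal sketch of several independent detectors analyzing the same clip in parallel and merging their timestamped tags into one searchable index. The analyzer functions, tags and clip path are hypothetical stand-ins, not PFT's actual engines:

```python
# Illustrative only: three independent "AI engines" analyze the same clip in
# parallel, and their timestamped tags are merged into one searchable index.
# The analyzers here are stand-in stubs, not PFT's actual models.
from concurrent.futures import ThreadPoolExecutor

def face_engine(clip_path):
    # A real engine would run face recognition frame by frame.
    return [{"t": 12.4, "tag": "face:anchor_jane_doe"}]

def audio_engine(clip_path):
    # A real engine would classify signature sounds in the audio track.
    return [{"t": 31.0, "tag": "sound:water_splash"}]

def object_engine(clip_path):
    # A real engine would detect distinct on-screen objects.
    return [{"t": 45.7, "tag": "object:basketball"}]

def index_clip(clip_path):
    engines = (face_engine, audio_engine, object_engine)
    with ThreadPoolExecutor(max_workers=len(engines)) as pool:
        results = list(pool.map(lambda fn: fn(clip_path), engines))
    # Flatten into one time-ordered list of tags for search and recall.
    return sorted((tag for tags in results for tag in tags), key=lambda x: x["t"])

print(index_clip("archive/episode_101.mp4"))
```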
Primestream's xChange platform uses AI/ML to power its content discovery tools, providing a wide range of search options in the process, according to Alan Dabul, director of product development for Primestream.
"You can narrow the search down not just to President Trump, but to those specific clips where he is talking about taxes," he said. "You can then narrow the search further to those times when he is speaking about taxes in an office setting, and then see who is with the president in the shot at that time."
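A rough sketch of how such layered filters might be expressed against a tagged archive; the metadata records and field names are invented for illustration and are not Primestream's xChange schema:

```python
# Illustrative only: layered search filters (person -> topic -> setting)
# narrowing a tagged archive. The records below are invented examples.
clips = [
    {"id": "c1", "people": {"donald_trump", "aide"}, "topics": {"taxes"}, "setting": "office"},
    {"id": "c2", "people": {"donald_trump"}, "topics": {"trade"}, "setting": "rally"},
    {"id": "c3", "people": {"donald_trump", "advisor"}, "topics": {"taxes"}, "setting": "press_room"},
]

def search(clips, person=None, topic=None, setting=None):
    for clip in clips:
        if person and person not in clip["people"]:
            continue
        if topic and topic not in clip["topics"]:
            continue
        if setting and setting != clip["setting"]:
            continue
        yield clip

# Narrow step by step: the president, then taxes, then an office setting.
hits = list(search(clips, person="donald_trump", topic="taxes", setting="office"))
print([c["id"] for c in hits], hits[0]["people"])  # who else is in the shot
```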
SPORTS AND LIVE EVENTS
Sports and other live events are among the most labor-intensive productions for broadcasters, given how much content has to be created on the fly. Tedial's SMARTLIVE metadata engine uses AI/ML to automate the media management tasks associated with these productions, including metadata tagging and automatic clip creation and distribution to digital platforms and social media during live events. SMARTLIVE can also manage multi-venue feeds and support multiple, instantaneous content searches to integrate archival footage into live broadcasts.
"SMARTLIVE allows the production team to create more content, leading to increased fan engagement and additional revenue, using the same budget and with the same team," said Jerome Wauthoz, vice president of products for Tedial. "SMARTLIVE also connects directly to existing production environments, so our customers can use their current infrastructure to ingest, edit and deliver content; no additional investment is necessary."
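The general pattern, sketched below with invented event data and a plain ffmpeg call rather than Tedial's own tooling, is to turn timestamped event metadata from the live feed into ready-to-publish clips:

```python
# Illustrative only: turning timestamped event metadata from a live feed into
# ready-to-publish clips with ffmpeg. This shows the general idea of automated
# clip creation; it is not Tedial's SMARTLIVE implementation.
import os
import subprocess

events = [  # e.g. produced automatically by an AI tagging engine
    {"label": "goal_home_team", "start": 1834.0, "duration": 25.0},
    {"label": "red_card", "start": 2410.5, "duration": 20.0},
]

def cut_clip(source, event, out_dir="clips"):
    os.makedirs(out_dir, exist_ok=True)
    out_path = f"{out_dir}/{event['label']}_{int(event['start'])}.mp4"
    subprocess.run([
        "ffmpeg", "-ss", str(event["start"]), "-i", source,
        "-t", str(event["duration"]), "-c", "copy", out_path,
    ], check=True)
    return out_path

for ev in events:
    print("created", cut_clip("match_feed.mp4", ev))
```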
CAPTIONING AND TRANSLATIONS
Another labor-intensive area where AI/ML is gaining traction is multilingual captioning. Using speech-to-text AI systems, vendors can automatically generate text captions from the content's audio and provision them in a range of languages within the same data stream.
"The algorithms are trained to learn from data in real time, absorbing local terms and dialects for the optimal captioning experience," said Brandon Sullivan, senior offering manager for IBM Watson Media. "As AI and machine learning training capabilities improve, local dialects, places and specific names, as well as the voices of individual speakers, will all be accurately captured. Down the road, this will not only transform closed captioning but also automated translation, video indexing, and more."
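For readers who want to experiment, the basic workflow can be reproduced with an openly available speech-to-text model such as openai-whisper; the vendors quoted here do not disclose which engines they use, and the file names below are placeholders:

```python
# Illustrative only: generating timed captions with an openly available
# speech-to-text model (openai-whisper). This shows the automated caption
# workflow, not any vendor's production engine.
import whisper  # pip install openai-whisper

def srt_timestamp(seconds):
    ms = int(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

model = whisper.load_model("base")
result = model.transcribe("newscast.wav")  # task="translate" yields English output

with open("newscast.srt", "w", encoding="utf-8") as srt:
    for i, seg in enumerate(result["segments"], start=1):
        srt.write(f"{i}\n{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n")
        srt.write(seg["text"].strip() + "\n\n")
```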
Captioning and lip sync are two of the AI/ML technologies featured in BATON, Interra Systems' video QC platform. "With AI/ML, you can improve the accuracy and speed of captioning, which is a resource-intensive, time-consuming process," said Anupama Anantharaman, vice president of product management for the Silicon Valley-based provider of video QC and monitoring technology. "It is also particularly effective at detecting lip sync, the alignment between the movement of lips onscreen and what is being said."
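The signal-processing idea behind automated lip-sync checks can be illustrated with a toy example: estimate the lag between a per-frame mouth-movement signal and the audio energy envelope by cross-correlation. The two signals below are synthetic stand-ins, not BATON's method:

```python
# Illustrative only: estimating lip-sync offset as the lag that best aligns a
# per-frame "mouth openness" signal with the audio energy envelope.
import numpy as np

fps = 25
t = np.arange(0, 10, 1 / fps)                      # 10 seconds of 25 fps frames
mouth_openness = np.clip(np.sin(2 * np.pi * 0.8 * t), 0, None)
audio_energy = np.roll(mouth_openness, 5)          # audio lags video by 5 frames

def estimate_lag(video_sig, audio_sig, fps):
    v = video_sig - video_sig.mean()
    a = audio_sig - audio_sig.mean()
    corr = np.correlate(a, v, mode="full")
    lag_frames = corr.argmax() - (len(v) - 1)
    return lag_frames / fps                        # seconds; positive = audio late

print(f"estimated lip-sync offset: {estimate_lag(mouth_openness, audio_energy, fps):.2f} s")
```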
Telestream's Telestream Cloud includes captioning among its many cloud-based AI/ML-enabled offerings; the others include video transcoding for multiple delivery platforms and quality/compliance checks, according to Remi Fourreau, cloud product manager for the company.
"We use the speech-to-text capabilities of many cloud-based providers to generate accurate captions and subtitles in many languages," Fourreau said. "This is an area where AI/ML really shines in doing the task accurately and efficiently."
ENCO's enCaption4 platform provides automated closed captioning for live and pre-recorded TV content in real time, combining AI-driven machine learning with a neural-network speech-to-text engine. In addition to newsroom rundown imports that teach it unique words via AI, enCaption4 can be taught special words such as host and cast names, as well as local and regional terms.
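One simple way such a "taught" vocabulary can be applied, sketched here with an invented word list and a post-correction pass rather than ENCO's actual mechanism, is to map likely mis-transcriptions back to their preferred spellings:

```python
# Illustrative only: applying a "taught" vocabulary to caption text, mapping
# likely mis-transcriptions of host names and local terms back to preferred
# spellings. The word list is invented, not ENCO's.
import re

TAUGHT_VOCABULARY = {
    r"\bjon smythe\b": "Jon Smythe",               # anchor name
    r"\bpuyallup\b": "Puyallup",                   # local place name
    r"\bgee whiz burger\b": "GeeWhiz Burger",      # regional sponsor
}

def apply_vocabulary(caption_text):
    for pattern, replacement in TAUGHT_VOCABULARY.items():
        caption_text = re.sub(pattern, replacement, caption_text, flags=re.IGNORECASE)
    return caption_text

print(apply_vocabulary("coming up, jon smythe visits gee whiz burger in puyallup"))
```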