Vision and hearing are your main senses when experiencing a movie. You recognize the actors, and you understand the spoken language even if it is not your native language. You follow the story and enjoy the stunning cinematography of the different environments - and by sharing all these experiences you can not only relate to the movie itself but also convince your friends to see it. Wouldn't it be great if your Media Asset Management system could possess similar capabilities when managing your media? Being able to understand the language, recognize actors, detect and define parts of the image - maybe even differentiate between genres? But how?
To do this you need a system that can actually see and listen to what is inside your media - a system that has cognitive capabilities like your own and can store that information in - yes, you guessed it - metadata.
But how do we navigate our vast and rapidly growing archives of file-based media?
Media files today already include a lot of metadata in a descriptive format, with room for general metadata as well as technical metadata describing the actual file structure. MAM (Media Asset Management) systems make use of this existing metadata, along with additional layers of metadata frameworks, to help you navigate, find, and tag not only the media files themselves but also time-based intervals within the media.
Because of this, you can argue that the true definition of a media file must include both an audio-visual asset AND an associated metadata description. Without one or the other, the asset is not complete.
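To make that definition concrete, here is a minimal Python sketch of what such a "complete" asset could look like: the essence file alongside descriptive, technical, and time-based metadata. The class and field names are assumptions made up for this illustration, not Vidispine's actual data model.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TimedMetadata:
    """Metadata that applies to a time interval of the media, not the whole file."""
    start_seconds: float
    end_seconds: float
    values: Dict[str, str]  # e.g. {"speech": "...", "face": "Jane Doe"}

@dataclass
class MediaAsset:
    """A 'complete' asset: the audio-visual essence plus its metadata description."""
    file_path: str                                                # the essence file
    descriptive: Dict[str, str] = field(default_factory=dict)     # title, genre, rights ...
    technical: Dict[str, str] = field(default_factory=dict)       # codec, resolution, duration ...
    timeline: List[TimedMetadata] = field(default_factory=list)   # time-based metadata layers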
Cognitive Metadata - to boldly go where no MAM has gone before
Traditionally, the common notion is that while a machine can read and act on the text-based metadata associated with a media file, only a human can understand the storyline. We can detect lip sync, recognize actors, emotions, and all the visual objects inside a frame. We can also listen to the spoken language, understand the story, and translate it into another language.
Because of this common view of the differences between machine and human capabilities, it is still quite common for production companies and similar organizations to divide many tasks in a media supply chain between man and machine along these lines.
But times are changing, and they are changing fast. For any Content Owner, CTO or technical strategist building a modern media workflow, it is vital to challenge this traditional view of what machines can and cannot do.
Interview with Ralf Jansen, Product Manager and Software Architect at Arvato / Vidispine
To find out more about this subject, we talked to Ralf Jansen, Product Manager and Software Architect at Arvato / Vidispine AB. Ralf Jansen has a strong technical background: he completed a computer science degree with a diploma thesis at the Fraunhofer Institute and has since worked as a developer and software architect in the industry for nearly 20 years. Today Ralf Jansen manages the development of the new Vidinet Cognitive Services (VCS) and is part of the Vidinet partner success team.
So, Ralf, why are cognitive services important?
Cognitive services allow the machine to find information inside the video and audio frames themselves, very much like we humans interpret the same content. This of course opens up important new possibilities depending on what type of workflow you are managing. A channel distributor can use cognitive services to automatically find new types of information in a huge amount of media content that could not be processed manually before - and then use or present those insights to the viewer as programs, highlights, suggested shows, or even auto-generated trailers. Cognitive services carry this new information as metadata and give your MAM system new and much more granular methods of managing your media files. This is very important in the process of optimizing the performance and capabilities of your evolving media supply chain.
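As a rough illustration of how cognitive results become granular, time-coded metadata, here is a small Python sketch that keeps only confident detections and maps them onto timeline entries a MAM could store. The detection structure, labels, and confidence threshold are assumptions for the example, not a specific vendor format.

# Example output from a cognitive analysis of a sports broadcast (values invented).
detections = [
    {"label": "goal celebration", "confidence": 0.92, "start": 312.0,  "end": 318.5},
    {"label": "crowd",            "confidence": 0.61, "start": 300.0,  "end": 360.0},
    {"label": "interview",        "confidence": 0.88, "start": 1500.0, "end": 1620.0},
]

def to_timeline_entries(detections, min_confidence=0.8):
    """Keep only confident detections and turn them into time-coded metadata entries."""
    return [
        {"start": d["start"], "end": d["end"], "values": {"label": d["label"]}}
        for d in detections
        if d["confidence"] >= min_confidence
    ]

highlights = to_timeline_entries(detections)
# -> two entries that could drive a highlight reel or an auto-generated trailer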
Revenue, and how we can improve revenue, is of course a driver for the advancement and adoption of cognitive services, just as for most other technology. And once you get familiar with the idea of challenging the common view of what machines can do, the subject of revenue through technology gets even more interesting.
In what areas could cognitive services improve existing revenue streams?
Knowing and understanding the inside of your media opens up many new opportunities that can improve revenue and help customers monetize the media assets they own. The first one that comes to mind is of course speech to text, where automatic transcription can already today reach, or in the best case even exceed, the magical benchmark of human understanding (roughly a 5% error rate), depending on how clearly the speech is articulated and how much of the vocabulary is known. Automatic speech to text at this level not only frees up human resources and saves money otherwise spent on external subtitling services, but also enables a new layer of time-based metadata: you can navigate in time to find deep-linked subjects, names, and topics simply by searching the contents of your subtitles with your MAM system's search capabilities - in our case powered by Elasticsearch. And this is of course just one of many examples.
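As a hedged sketch of the search side, the snippet below indexes time-coded subtitle segments in Elasticsearch and queries them, so that a text hit points to an exact position in the media. The index name, field names, and sample data are assumptions invented for this example and do not describe Vidispine's actual integration.

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# One subtitle segment produced by speech to text, with its time span (values invented).
segment = {
    "asset_id": "asset-42",
    "start_seconds": 754.2,
    "end_seconds": 758.9,
    "text": "and that is when the expedition reached the summit",
}
es.index(index="subtitle-segments", document=segment)

# Find every position in the archive where "summit" is spoken.
hits = es.search(
    index="subtitle-segments",
    query={"match": {"text": "summit"}},
)
for hit in hits["hits"]["hits"]:
    src = hit["_source"]
    print(f'{src["asset_id"]} @ {src["start_seconds"]}s: {src["text"]}')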
It is important to understand the value of temporal metadata, since the captured reality stored in a video (and audio) file changes with every frame, at 30-60 frames per second or more - and temporal metadata lets us define accurate time spans for the different video and audio content detected by cognitive services. A post house ingesting reality content normally uses human resources for logging and preparing projects for the editors. In these and similar production workflows, the challenge is the huge amount of incoming raw footage that needs to be sorted and prepared before the editors can start.
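To show why frame accuracy matters here, the following sketch converts a detected time span (in seconds) into a timecode string a logger or editor could jump to. The detection values and the 30 fps frame rate are made up for the example.

def seconds_to_timecode(seconds: float, fps: int = 30) -> str:
    """Convert seconds to an HH:MM:SS:FF timecode string (non-drop-frame)."""
    total_frames = round(seconds * fps)
    frames = total_frames % fps
    total_seconds = total_frames // fps
    return (f"{total_seconds // 3600:02d}:{(total_seconds % 3600) // 60:02d}:"
            f"{total_seconds % 60:02d}:{frames:02d}")

# A time span detected by a cognitive service in incoming raw footage (values invented).
detection = {"label": "boat arrives at the island", "start": 93.4, "end": 128.9}
print(f'{detection["label"]}: '
      f'{seconds_to_timecode(detection["start"])} - {seconds_to_timecode(detection["end"])}')
# -> boat arrives at the island: 00:01:33:12 - 00:02:08:27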










