From concept to kick-off: How TAMS could transform sports workflows
By Paul Markham
Tuesday, October 28, 2025 - 09:43
Techex tx darwin provided the ingest and routing layer during initial demonstrations of the architecture
When BBC R&D first outlined the Time-Addressable Media Store (TAMS), it sounded like a research concept with niche potential. But as real-world tests begin, from Reuters' newsroom pilots to Ravensbourne University's 24-hour 'On Air' broadcast, the possibilities for live sport are becoming clearer.
In the first part of this article, we explored how TAMS rethinks storage and contribution by treating video, audio and data as time-indexed sources rather than files. An open API standard embraced by partners including AWS, Drastic, Techex and Reuters, TAMS replaces duplication and file transfer with immediate, time-addressed access to multiple media components organised as sources and flows. We looked at how this 'reference, don't replicate' model can streamline news and sports workflows, power real-time clipping, and make time, not files, the currency of media production.
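To make 'reference, don't replicate' concrete, the sketch below shows roughly how a client might ask a TAMS store which stored segments cover a given time range and then read them straight from object storage. It is a minimal illustration based on the open TAMS specification: the store URL and flow ID are invented, and field names such as get_urls may differ between versions and deployments.

```python
import requests  # third-party HTTP client

# Hypothetical store URL and flow ID, for illustration only.
TAMS_BASE = "https://tams.example.com/v5"
FLOW_ID = "3f1a9c2e-0000-0000-0000-000000000001"  # e.g. camera 1, main video flow

# Ask the store which media segments cover a given time range.
# TAMS expresses ranges as "[start_end)" in seconds:nanoseconds;
# here, the 90 seconds starting two hours into the timeline.
timerange = "[7200:0_7290:0)"

resp = requests.get(
    f"{TAMS_BASE}/flows/{FLOW_ID}/segments",
    params={"timerange": timerange},
    timeout=10,
)
resp.raise_for_status()

# Each segment record points at the underlying object in storage (e.g. S3),
# so every tool reads the same bytes rather than holding its own copy.
for segment in resp.json():
    print(segment["timerange"], segment["get_urls"][0]["url"])
```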
So what happens when real-world sports workflows meet this model? In a typical outside broadcast or remote production today, camera feeds are ingested in parallel to multiple systems including replay servers such as EVS XT-VIA or GV LiveTouch, production switchers, multiviewers and central SAN or NAS storage. Audio feeds often travel separately and everything relies on timecode or PTP to lock it together. Logging teams tag moments; highlights editors move files to shared storage; and social producers wait for proxy renders to appear.
TAMS collapses these parallel paths into one synchronised store, letting all tools access the same material by time reference rather than through duplicate ingest. The replay operator can call up the same moments the highlights editor is cutting, and an AI engine can analyse them in parallel, all referencing the same segment of time, not a separate copy of the media. Instead of multiple ingest servers and file systems, a TAMS deployment acts as an addressable timeline for the entire event. Ultra-low-latency live for the biggest events will still rely on traditional paths, but the near-real-time performance from TAMS is fast enough to handle everything else.
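Treating time as the address means every tool has to speak the same time-range notation. The helper below is a small, hypothetical sketch of how a production tool might turn a wall-clock moment into a TAMS-style "[start_end)" timerange; the seconds:nanoseconds format follows the published specification, but whether a given deployment anchors its timeline to wall clock, TAI or an event clock is a deployment choice, not something confirmed by the demonstrations described here.

```python
from datetime import datetime, timezone

def tams_timerange(start: datetime, duration_s: float) -> str:
    """Build a TAMS-style timerange string, "[start_end)" in
    seconds:nanoseconds since the epoch, from a start time and a duration."""
    start_s = start.timestamp()
    end_s = start_s + duration_s

    def ts(seconds: float) -> str:
        whole = int(seconds)
        nanos = int(round((seconds - whole) * 1e9))
        return f"{whole}:{nanos}"

    return f"[{ts(start_s)}_{ts(end_s)})"

# A goal logged at 15:42:07 UTC becomes a 20-second window on the store's
# timeline that replay, highlights and AI tools can all request directly.
goal = datetime(2025, 10, 25, 15, 42, 7, tzinfo=timezone.utc)
print(tams_timerange(goal, 20))   # "[1761406927:0_1761406947:0)"
```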
The first demonstrations of this architecture came from BBC R&D working with industry partners:
Techex tx darwin provides the ingest and routing layer, taking SMPTE 2110, SRT or RIST feeds from the field and converting them into time-indexed Sources within a TAMS store.
AWS Media Services' Cloud Native Agile Production (CNAP) project provides Amazon S3 for storage, DynamoDB for indexing, Lambda with MediaConvert for on-demand rendering of Flows, and access to other tools.
Drastic Technologies supplies its MediaReactor plug-in for Adobe Premiere Pro, which includes a TAMS reader and was demonstrated in the AWS-BBC CNAP workflow.
Reuters, using its Imagen MAM platform, acts as a high-volume content generator, testing how the TAMS API can accelerate newsroom and remote-event workflows.
The most ambitious implementation to date was this month's Global Media and Entertainment Talent Manifesto 'On Air' broadcast, a continuous 24-hour live programme based at Ravensbourne University in London and produced by students around the world. Using Techex tx darwin, all the live studio feeds were ingested into an AWS S3-based TAMS store as one continuous, time-addressable Source. Students working remotely were able to access the store through the Drastic plug-in, clipping highlights and social packages directly from the live timeline.
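Conceptually, the ingest layer's job is to register that continuous Source and its Flows with the store and keep appending segments for as long as the programme runs. The sketch below illustrates only the registration step against a TAMS-style API; the endpoint, field names and format URN follow one reading of the open specification and are not a description of how tx darwin is actually implemented.

```python
import uuid
import requests

TAMS_BASE = "https://tams.example.com/v5"   # illustrative store URL

# A Source represents the studio feed as an abstract, time-addressable entity;
# a Flow is one concrete rendition of it (codec, resolution) whose segments
# the ingest process keeps appending as the 24-hour programme runs.
source_id = str(uuid.uuid4())
flow_id = str(uuid.uuid4())

flow = {
    "id": flow_id,
    "source_id": source_id,
    "format": "urn:x-nmos:format:video",   # NMOS-style format URN
    "label": "studio-1-programme",
    "codec": "video/h264",
}

# Flows are registered with an idempotent PUT on the flow ID.
resp = requests.put(f"{TAMS_BASE}/flows/{flow_id}", json=flow, timeout=10)
resp.raise_for_status()
print("registered flow", flow_id, "for source", source_id)
```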
That demonstration provides a glimpse of what's now possible. Replace those student studios with football pitches or cricket grounds and the same logic applies, from top-tier events to regional and lower-tier sports. Ingest once, then allow logging, replay, highlights and social teams to operate concurrently on the same timeline. The replay operator marks the goal; the logger tags it for metadata; the social editor pulls a vertical clip; and an AI agent updates statistics, all from the same segment of the TAMS timeline, without fumbling or delay.
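Because everyone shares one timeline, a tag written by the logger is already enough for the other teams to act on. The snippet below is a deliberately simple, hypothetical event log kept alongside the store, not part of the TAMS API itself, showing how a single time reference (in the same timerange notation as the helper above) can be reused by replay, highlights, social and AI tooling without any media being copied.

```python
from dataclasses import dataclass

@dataclass
class EventTag:
    label: str       # e.g. "goal", "wicket", "sponsor board in shot"
    timerange: str   # TAMS-style "[start_end)" range on the shared timeline
    flow_id: str     # which flow the tag was made against

log: list[EventTag] = []

def tag_moment(label: str, timerange: str, flow_id: str) -> EventTag:
    """Record a time-anchored tag that any downstream tool can resolve."""
    tag = EventTag(label, timerange, flow_id)
    log.append(tag)
    return tag

# The logger marks the goal once...
goal = tag_moment("goal", "[1761406927:0_1761406947:0)", "studio-1-programme")

# ...and the social editor reuses the same time reference (in practice padded
# by a few seconds either side) as the clip to render; nothing is exported,
# copied or re-ingested.
print("render vertical clip for", goal.timerange)
```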
The immediate gain is turnaround speed: fewer transfers, fewer conversions and faster collaboration between teams, with less storage needed. But the long-term benefit is architectural: compute tasks such as captioning, QC or compliance could run as serverless microservices. For remote productions, this means lighter OB trucks, smaller on-site crews and reduced cloud egress costs, since only the specific ranges needed for output are ever processed.
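As a sketch of that architectural shift, the handler below shows what a range-scoped serverless worker might look like: handed a flow ID and a timerange, it resolves only the segments covering that range and would run captioning or QC on just those objects. The function shape mirrors an AWS Lambda handler, but the store URL, event fields and segment fields are assumptions made for illustration.

```python
import requests

TAMS_BASE = "https://tams.example.com/v5"   # illustrative store URL

def handler(event, context):
    """Serverless worker sketch: caption or QC-check only the time range
    it is asked for, never the whole event."""
    flow_id = event["flow_id"]
    timerange = event["timerange"]          # e.g. "[7200:0_7290:0)"

    # Resolve just the segments that cover the requested range.
    resp = requests.get(
        f"{TAMS_BASE}/flows/{flow_id}/segments",
        params={"timerange": timerange},
        timeout=10,
    )
    resp.raise_for_status()

    results = []
    for segment in resp.json():
        url = segment["get_urls"][0]["url"]
        # Placeholder for the real work: fetch the object, run captioning/QC,
        # then write the results back as time-anchored metadata.
        results.append({"timerange": segment["timerange"], "media": url})
    return {"processed": len(results), "segments": results}
```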
Because every frame and metadata element lives in a single, time-addressable structure, the same store can double as a searchable archive. Rights teams can query by event, player or sponsor and instantly retrieve all relevant sequences. Clips no longer need to be exported, logged and re-ingested into separate systems; they already exist, addressable by time. This continuity between live, near-live and archive has potential for automated highlights and personalised VOD.
Challenge | Impact
Live latency: HTTP/object-store access is slower than a traditional live workflow | Fine for near-real-time, highlights and lower tiers; not for top-flight live
Integration: legacy replay and MAM tools expect files | Middleware bridges needed until more vendor support arrives
Vendor plurality: early work dominated by specific vendors | Risk to more widespread adoption (but mitigated by the open standard)
Cost and scale: frequent reads and rendering drive cost rather than large egress | Cloud cost uncertainty and variability
Skills: requires software/cloud literacy among support engineers | Training and cultural change essential
The biggest shift is not the technology itself but the way people work.