
Tuesday, March 25, 2025 - 3:40 pm
Every week brings more clarity about how generative AI and the cloud can open up new, more efficient ways to create content. The most recent example comes from the PGA TOUR, which used generative AI to create written narrative copy for the TOURCAST app, letting fans not only see more than 30,000 shots but also read commentary and details about each of them.
Scott Gutterman, PGA TOUR senior vice president of Digital and Broadcast Technologies, says that when the TOUR first began looking at generative AI there was plenty of experimenting, but also a bigger question: what could generative AI do to improve TOUR-related content creation and, ultimately, better serve fans, players, and the tournaments themselves?
The use of generative AI allowed the PGA TOUR to add shot descriptions to more than 30,000 shots during THE PLAYERS.
"It's really about scaling coverage across the entire event. Typically, our staff can cover 25 or 30 golfers, but with generative AI content generation and data collected through ShotLink powered by CDW, we can tell the story of every single player," he says. "And it's not in a basic way, like JJ Spaun hit a 300-yard drive and has 125 yards to the hole; we have done that for years and years. Now it will add in context, like he just hit his longest drive of the day or, at 125 yards out, he gets to within 10 feet of the hole 20% of the time. It effectively generates two things, a fact and context about each shot, and we're pretty excited because it gives us the foundation to tell a story for every single shot."
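Gutterman's "a fact and context" framing maps onto a small data transformation that happens before any language model is involved. The sketch below is illustrative only, not TOUR code: the numbers, field names, and thresholds are made up, and real ShotLink data is far richer.

```python
# Hypothetical per-player history in the spirit of ShotLink data (values invented).
drives_today = [288, 301, 295]  # yards, earlier drives this round
approach_history = [(125, 9), (127, 31), (124, 8), (126, 12), (125, 40)]  # (yards out, feet to hole)

def fact_and_context(drive_yards: float, yards_to_hole: float) -> tuple[str, str]:
    """Return the two pieces described in the article: a plain fact plus context."""
    fact = f"hit a {drive_yards:.0f}-yard drive and has {yards_to_hole:.0f} yards to the hole"

    context_bits = []
    if drives_today and drive_yards > max(drives_today):
        context_bits.append("his longest drive of the day")

    # Share of similar-length approaches that finished inside 10 feet.
    similar = [feet for yds, feet in approach_history if abs(yds - yards_to_hole) <= 5]
    if similar:
        pct = 100 * sum(1 for feet in similar if feet <= 10) / len(similar)
        context_bits.append(f"from this range he finishes within 10 feet {pct:.0f}% of the time")

    return fact, "; ".join(context_bits)

print(fact_and_context(305, 125))
```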
Gutterman says the effort has been two years in the making and began with the TOUR working with its AI partner AWS and becoming a foundation partner for the AWS Bedrock generative AI tool set.
"Bedrock effectively allows you to use almost any generative AI model and a suite of tools to create these types of experiences," he explains. "We are using Anthropic's Claude models. So, for us in particular, we're using Anthropic Claude 3.5 Sonnet and have spent the past year moving away from proofs of concept and instead figuring out how to operationalize it."
Moving beyond POCs involved building a set of data-acquisition and context services that take data from ShotLink and then apply a rules engine, so that the prompt engine knows how to use the data and apply context that makes sense. (For example, after a player hits the first tee shot of the day off the first hole, it doesn't write that the player hit the longest drive of the day.)
"You have the data coming in and then the context services that talk to a rules base, which also includes things like prioritizing what types of things to talk about," he adds. "For example, we can tell it to talk about greens in regulation on approach shots every three narratives so that the text doesn't become redundant across all the players. We also teach the model different ways to talk about a drive so that it isn't described the same way every time, or the same way a putt would be described. And all of that is fed into the Anthropic Claude model to generate the narratives." A rough sketch of that flow appears below.
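The TOUR has not published its rules engine, but a minimal Python sketch of the idea, using hypothetical field names and only the rules mentioned in the article, might look like this: shot data comes in, a few prioritization rules decide which context is eligible, phrasing templates add variety, and the result is assembled into a prompt for the model.

```python
from dataclasses import dataclass
import random

@dataclass
class Shot:
    player: str
    shot_type: str        # "drive", "approach", "putt", ...
    distance_yds: float
    remaining_yds: float
    is_longest_drive_of_day: bool
    narrative_index: int  # how many narratives have been generated so far today

# Hypothetical phrasing variants so a drive isn't described the same way every time.
DRIVE_TEMPLATES = [
    "rips a {distance:.0f}-yard drive",
    "sends a drive {distance:.0f} yards down the fairway",
    "finds the fairway with a {distance:.0f}-yard tee shot",
]

def build_prompt(shot: Shot) -> str:
    """Apply simple prioritization rules, then assemble a prompt for the language model."""
    if shot.shot_type == "drive":
        fact = f"{shot.player} {random.choice(DRIVE_TEMPLATES).format(distance=shot.distance_yds)}"
    else:
        fact = f"{shot.player} hits a {shot.shot_type} leaving {shot.remaining_yds:.0f} yards"

    context = []
    # Rule from the article: the first tee shot of the day can't be "the longest drive of the day."
    if shot.shot_type == "drive" and shot.is_longest_drive_of_day and shot.narrative_index > 0:
        context.append("note that this is his longest drive of the day")
    # Rule from the article: mention greens in regulation on approach shots only every third narrative.
    if shot.shot_type == "approach" and shot.narrative_index % 3 == 0:
        context.append("include his greens-in-regulation rate for the round")

    return (
        "Write one short, broadcast-style sentence about this golf shot.\n"
        f"Fact: {fact}\n"
        f"Context to weave in: {'; '.join(context) if context else 'none'}"
    )
```

In this sketch the rules base is just a pair of if-statements; the point is only that the prompt engine receives curated facts and context rather than raw data.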
The output narrative from Claude 3.5 Sonnet then goes through a validation service to make sure the ShotLink data referenced in the output matches what was fed into the system (for example, drive distance). The next step is a cosine-similarity check, whereby the system makes sure the text falls within the range of how one would normally talk about a drive. If it passes those tests, the text is sent to the publishing engine, where an additional set of tests ensures complete accuracy.
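The article doesn't detail how those two checks are implemented. As a hedged illustration only, a fact check could compare numbers in the generated text against the source data, and the cosine-similarity check could compare the narrative's embedding against approved examples; the threshold and the embedding model here are assumptions.

```python
import re
import numpy as np

def facts_match(narrative: str, shot_data: dict) -> bool:
    """Require every number mentioned in the text to appear in the source shot data.
    Illustrative only: a production validator would be far more structured."""
    numbers_in_text = {float(n) for n in re.findall(r"\d+(?:\.\d+)?", narrative)}
    numbers_in_data = {float(v) for v in shot_data.values() if isinstance(v, (int, float))}
    return numbers_in_text <= numbers_in_data

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def passes_style_check(narrative_vec: np.ndarray,
                       reference_vecs: list[np.ndarray],
                       threshold: float = 0.8) -> bool:
    """Pass if the narrative's embedding is close to at least one approved example
    of how this type of shot is normally described."""
    return any(cosine_similarity(narrative_vec, ref) >= threshold for ref in reference_vecs)
```

Only text that clears both gates would move on to the publishing engine's final tests.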
"During THE PLAYERS Championship, the accuracy on the 30,000 shots was around 96%, which is where we thought we would be," adds Gutterman.
Gutterman says the system is currently limited to text because the models do not process video, especially live video from a live event.
"We are building toward a day where it'll be a combination of live data, live audio, and live video, and then using multimodal output to create a video and generate a voice," adds Gutterman. "We are working with AWS on a synthetic voice that can read off the prompts, and we'll eventually get there."
The advantage of Bedrock is that it allows the PGA TOUR to be model-agnostic: it can find the best model for each task and, if a future model can do a function at a lower cost, pivot to it without any issues.
"You'll hear some people talking about a model that can do everything, and what we're seeing is that that is not the case," says Gutterman. "We want to be able to use Anthropic for this instance but use AWS Nova, which is brand new, for things like image recognition and other work. And then there will probably be another model for things like translation."
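As a rough illustration of that model-agnostic setup (not the TOUR's actual code), Bedrock's Converse API lets the same calling code target different hosted models simply by changing the model ID; the ID, region, and inference parameters below are assumptions for the sketch.

```python
import boto3

# One Bedrock runtime client can front many model families; only the model ID changes per task.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def generate(model_id: str, prompt: str) -> str:
    """Send the same prompt to whichever model is best (or cheapest) for the task."""
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 200, "temperature": 0.3},
    )
    return response["output"]["message"]["content"][0]["text"]

# Model ID is illustrative; the IDs available depend on the account's Bedrock access.
narrative = generate("anthropic.claude-3-5-sonnet-20240620-v1:0",
                     "Write one sentence about this golf shot: ...")
```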
Gutterman says the PGA TOUR continues to build internal tools for its staff, like story-generation tools and synthetic-voice commentary capabilities.
"For five years, we have not had commentary on the 48 streams of Every Shot Live; it's just been natural sound and stats," explains Gutterman. "Our goal is always to have a human telling the story, but having two commentators across 48 streams all day is cost-prohibitive. With AI, the viewer could turn on commentary the same way they turn on closed captioning. And if they want it in Spanish, they can have that as well."
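The article doesn't name the services behind that closed-captioning-style toggle. Purely as a sketch of how the pieces could fit together, AWS's managed Amazon Translate and Amazon Polly services could supply the Spanish text and a synthetic voice for a generated narrative; the voice choices and flow here are assumptions, not the TOUR's implementation.

```python
import boto3

translate = boto3.client("translate", region_name="us-east-1")
polly = boto3.client("polly", region_name="us-east-1")

def commentary_audio(narrative: str, language: str = "en") -> bytes:
    """Optionally translate a generated narrative, then synthesize speech for the stream."""
    text = narrative
    if language == "es":
        text = translate.translate_text(
            Text=narrative, SourceLanguageCode="en", TargetLanguageCode="es"
        )["TranslatedText"]

    # Voice choice is illustrative; a production system would select per-language voices.
    voice = "Lupe" if language == "es" else "Joanna"
    audio = polly.synthesize_speech(Text=text, OutputFormat="mp3", VoiceId=voice)
    return audio["AudioStream"].read()
```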
Those types of personalized services are the end goal for much of the generative AI effort.
"It moves us down the road of hyper-personalization, where a fan can get a story at the end of the day with the best video from their favorite players," he adds.