
Editor's note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX PC users.
The era of the AI PC is here, and it's powered by NVIDIA RTX and GeForce RTX technologies. With it comes a new way to evaluate performance for AI-accelerated tasks, and a new language that can be daunting to decipher when choosing between the desktops and laptops available.
While PC gamers understand frames per second (FPS) and similar stats, measuring AI performance requires new metrics.
Coming Out on TOPS

The first baseline is TOPS, or trillions of operations per second. Trillions is the important word here - the processing numbers behind generative AI tasks are absolutely massive. Think of TOPS as a raw performance metric, similar to an engine's horsepower rating. More is better.
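To get an intuition for what a TOPS rating means in practice, here is a rough back-of-the-envelope calculation. The per-step operation count below is an illustrative assumption, not a measured figure for any real model.

```python
# Rough, illustrative arithmetic: how long a fixed workload takes at a given
# TOPS rating. The operation count below is an assumption for illustration,
# not a measured figure for any real model.
def seconds_for_workload(total_ops: float, tops: float) -> float:
    """Time in seconds to execute `total_ops` operations at `tops` TOPS."""
    return total_ops / (tops * 1e12)

# Suppose one generative step needs ~10 trillion operations (assumed).
step_ops = 10e12

for tops in (40, 1300):  # NPU-class vs. high-end GPU-class throughput
    print(f"{tops:>5} TOPS -> {seconds_for_workload(step_ops, tops):.4f} s per step")
```

The same workload that takes a quarter second at 40 TOPS finishes in under 8 milliseconds at 1,300 TOPS, which is why raw throughput matters so much for interactive generative tasks.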
Compare, for example, the recently announced Copilot+ PC lineup by Microsoft, which includes neural processing units (NPUs) capable of upwards of 40 TOPS. That's sufficient for some light AI-assisted tasks, like asking a local chatbot where yesterday's notes are.
But many generative AI tasks are more demanding. NVIDIA RTX and GeForce RTX GPUs deliver unprecedented performance across all generative tasks - the GeForce RTX 4090 GPU offers more than 1,300 TOPS. This is the kind of horsepower needed to handle AI-assisted digital content creation, AI super resolution in PC gaming, generating images from text or video, querying local large language models (LLMs) and more.
Insert Tokens to Play

TOPS is only the beginning of the story. LLM performance is measured in the number of tokens generated by the model.
Tokens are the output of the LLM. A token can be a word in a sentence, or even a smaller fragment like punctuation or whitespace. Performance for AI-accelerated tasks can be measured in tokens per second.
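A minimal sketch of how tokens-per-second could be measured. The whitespace split below is a stand-in for a real subword tokenizer (production LLMs use BPE-style tokenizers, where punctuation and word fragments count as separate tokens), and `toy_generate` is a hypothetical placeholder for an actual model call.

```python
import time

# Minimal sketch: measuring tokens per second for a text generator.
# The whitespace split is a crude stand-in for a real subword tokenizer.
def tokens_per_second(generate, prompt: str) -> tuple[int, float]:
    start = time.perf_counter()
    output = generate(prompt)
    elapsed = time.perf_counter() - start
    n_tokens = len(output.split())  # crude token count
    return n_tokens, n_tokens / elapsed

# A toy "model" that returns a canned reply, purely for demonstration.
def toy_generate(prompt: str) -> str:
    return "Your notes from yesterday are in the Documents folder ."

n, tps = tokens_per_second(toy_generate, "Where are yesterday's notes?")
print(f"{n} tokens at {tps:,.0f} tokens/sec")
```

In a real benchmark, `generate` would be a call into an inference engine, and the token count would come from the tokenizer rather than a whitespace split.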
Another important factor is batch size, or the number of inputs processed simultaneously in a single inference pass. As an LLM will sit at the core of many modern AI systems, the ability to handle multiple inputs (e.g. from a single application or across multiple applications) will be a key differentiator. While larger batch sizes improve performance for concurrent inputs, they also require more memory, especially when combined with larger models.
The more you batch, the more (time) you save. RTX GPUs are exceptionally well-suited for LLMs due to their large amounts of dedicated video random access memory (VRAM), Tensor Cores and TensorRT-LLM software.
GeForce RTX GPUs offer up to 24GB of high-speed VRAM, and NVIDIA RTX GPUs up to 48GB, which can handle larger models and enable higher batch sizes. RTX GPUs also take advantage of Tensor Cores - dedicated AI accelerators that dramatically speed up the computationally intensive operations required for deep learning and generative AI models. That maximum performance is easily accessed when an application uses the NVIDIA TensorRT software development kit (SDK), which unlocks the highest-performance generative AI on the more than 100 million Windows PCs and workstations powered by RTX GPUs.
The combination of memory, dedicated AI accelerators and optimized software gives RTX GPUs massive throughput gains, especially as batch sizes increase.
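The batching tradeoff described above can be sketched with a simple cost model: each inference pass pays a fixed overhead (weight loads, kernel launches) plus an incremental cost and memory footprint per batched input. All the numbers below are assumptions for illustration, not measurements of any real GPU.

```python
# Illustrative model of why batching helps: each inference pass has a fixed
# overhead plus a per-input cost, so larger batches amortize the overhead.
# All constants are assumptions for illustration, not measurements.
FIXED_OVERHEAD_MS = 50.0   # assumed cost paid once per pass
PER_INPUT_MS = 5.0         # assumed incremental cost per batched input
MEM_PER_INPUT_GB = 1.5     # assumed extra memory per batched input

def pass_time_ms(batch_size: int) -> float:
    return FIXED_OVERHEAD_MS + PER_INPUT_MS * batch_size

for batch in (1, 4, 16):
    per_input = pass_time_ms(batch) / batch
    mem = MEM_PER_INPUT_GB * batch
    print(f"batch={batch:>2}: {per_input:5.1f} ms/input, ~{mem:4.1f} GB extra memory")
```

The per-input time drops sharply as the batch grows, while the memory requirement climbs linearly, which is why large VRAM capacity enables higher batch sizes.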
Text-to-Image, Faster Than Ever

Measuring image generation speed is another way to evaluate performance. One of the most straightforward ways uses Stable Diffusion, a popular AI image generation model that allows users to easily convert text descriptions into complex visual representations.
With Stable Diffusion, users can quickly create and refine images from text prompts to achieve their desired output. When using an RTX GPU, these results can be generated faster than processing the AI model on a CPU or NPU.
That performance is even higher when using the TensorRT extension for the popular Automatic1111 interface. RTX users can generate images from prompts up to 2x faster with the SDXL Base checkpoint - significantly streamlining Stable Diffusion workflows.
ComfyUI, another popular Stable Diffusion user interface, added TensorRT acceleration last week. RTX users can now generate images from prompts up to 60% faster, and can even convert these images to videos using Stable Video Diffusion up to 70% faster with TensorRT.
TensorRT acceleration can be put to the test in the new UL Procyon AI Image Generation benchmark, which delivers speedups of 50% on a GeForce RTX 4080 SUPER GPU compared with the fastest non-TensorRT implementation.
TensorRT acceleration will soon be released for Stable Diffusion 3 - Stability AI's new, highly anticipated text-to-image model - boosting performance by 50%. Plus, the new TensorRT-Model Optimizer enables accelerating performance even further. This results in a 70% speedup compared with the non-TensorRT implementation, along with a 50% reduction in memory consumption.
Of course, seeing is believing - the true test is in the real-world use case of iterating on an original prompt. Users can refine image generation by tweaking prompts significantly faster on RTX GPUs, taking seconds per iteration compared with minutes on a MacBook Pro with M3 Max. Plus, users get both speed and security, with everything remaining private when running locally on an RTX-powered PC or workstation.
The Results Are in and Open Sourced

But don't just take our word for it. The team of AI researchers and engineers behind the open-source Jan.ai recently integrated TensorRT-LLM into its local chatbot app, then tested these optimizations for themselves.
Source: Jan.ai

The researchers tested their implementation of TensorRT-LLM against the open-source llama.cpp inference engine across a variety of GPUs and CPUs used by the community. They found that TensorRT-LLM is 30-70% faster than llama.cpp.