
Editor's note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX PC users.
The era of the AI PC is here, and it's powered by NVIDIA RTX and GeForce RTX technologies. With it comes a new way to evaluate performance for AI-accelerated tasks, and a new language that can be daunting to decipher when choosing between the desktops and laptops available.
While PC gamers understand frames per second (FPS) and similar stats, measuring AI performance requires new metrics.
Coming Out on TOPS

The first baseline is TOPS, or trillions of operations per second. Trillions is the important word here - the processing numbers behind generative AI tasks are absolutely massive. Think of TOPS as a raw performance metric, similar to an engine's horsepower rating. More is better.
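As a rough sketch of what a TOPS rating means in practice, the time to finish a fixed workload scales inversely with it. The workload size below is a made-up illustrative number, and the calculation assumes the hardware sustains its full rated throughput, which real workloads rarely achieve:

```python
def seconds_for_workload(total_ops: float, tops: float) -> float:
    """Time to complete a workload, assuming the accelerator sustains
    its full TOPS rating (an idealized upper bound, not a benchmark)."""
    ops_per_second = tops * 1e12  # TOPS = trillions of ops per second
    return total_ops / ops_per_second

# Hypothetical task needing 500 trillion operations:
workload = 500e12
print(seconds_for_workload(workload, 40))    # 40-TOPS NPU  -> 12.5 s
print(seconds_for_workload(workload, 1300))  # 1,300-TOPS GPU -> ~0.38 s
```

The absolute numbers are illustrative; the point is the ratio: the same task that takes a 40-TOPS NPU over twelve seconds finishes in well under half a second at 1,300 TOPS.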
Compare, for example, the recently announced Copilot+ PC lineup from Microsoft, which includes neural processing units (NPUs) capable of upwards of 40 TOPS. That's sufficient for some light AI-assisted tasks, like asking a local chatbot where yesterday's notes are.
But many generative AI tasks are more demanding. NVIDIA RTX and GeForce RTX GPUs deliver unprecedented performance across all generative tasks - the GeForce RTX 4090 GPU offers more than 1,300 TOPS. This is the kind of horsepower needed to handle AI-assisted digital content creation, AI super resolution in PC gaming, generating images from text or video, querying local large language models (LLMs) and more.
Insert Tokens to Play

TOPS is only the beginning of the story. LLM performance is measured in the number of tokens generated by the model.
Tokens are the output of the LLM. A token can be a word in a sentence, or even a smaller fragment like punctuation or whitespace. Performance for AI-accelerated tasks can be measured in tokens per second.
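To make the token idea concrete, here is a deliberately simplified tokenizer that splits text into words and punctuation, plus a tokens-per-second calculation. Real LLMs use subword tokenizers (such as BPE), so their token counts differ from this toy version:

```python
import re

def simple_tokenize(text: str) -> list[str]:
    # Toy tokenizer: each word or punctuation mark becomes one token.
    # Production LLM tokenizers split into subword fragments instead.
    return re.findall(r"\w+|[^\w\s]", text)

def tokens_per_second(text: str, elapsed_s: float) -> float:
    """Throughput metric: tokens generated divided by wall-clock time."""
    return len(simple_tokenize(text)) / elapsed_s

output = "Tokens can be words, punctuation, or smaller fragments."
print(len(simple_tokenize(output)))      # 11 tokens
print(tokens_per_second(output, 0.5))    # 22.0 tokens/s if generated in 0.5 s
```

The same division applies to real measurements: count the tokens a model emits and divide by the elapsed time.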
Another important factor is batch size, or the number of inputs processed simultaneously in a single inference pass. As an LLM will sit at the core of many modern AI systems, the ability to handle multiple inputs (e.g. from a single application or across multiple applications) will be a key differentiator. While larger batch sizes improve performance for concurrent inputs, they also require more memory, especially when combined with larger models.
The more you batch, the more (time) you save. RTX GPUs are exceptionally well-suited for LLMs due to their large amounts of dedicated video random access memory (VRAM), Tensor Cores and TensorRT-LLM software.
GeForce RTX GPUs offer up to 24GB of high-speed VRAM, and NVIDIA RTX GPUs up to 48GB, which can handle larger models and enable higher batch sizes. RTX GPUs also take advantage of Tensor Cores - dedicated AI accelerators that dramatically speed up the computationally intensive operations required for deep learning and generative AI models. That maximum performance is easily accessed when an application uses the NVIDIA TensorRT software development kit (SDK), which unlocks the highest-performance generative AI on the more than 100 million Windows PCs and workstations powered by RTX GPUs.
The combination of memory, dedicated AI accelerators and optimized software gives RTX GPUs massive throughput gains, especially as batch sizes increase.
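One reason larger batch sizes demand more memory is the KV cache, which stores attention keys and values for every sequence in the batch. The following back-of-the-envelope estimate uses a hypothetical 7B-class model (32 layers, hidden size 4,096, FP16) as an assumption; actual models and runtimes vary:

```python
def kv_cache_bytes(batch_size: int, seq_len: int, n_layers: int,
                   hidden_size: int, bytes_per_elem: int = 2) -> int:
    """Rough KV-cache footprint: keys and values (the factor of 2) are
    each (batch, seq_len, hidden_size) per layer, at FP16 (2 bytes)."""
    return batch_size * seq_len * n_layers * 2 * hidden_size * bytes_per_elem

# Hypothetical 7B-class model: 32 layers, hidden size 4096, 2048-token context.
print(kv_cache_bytes(1, 2048, 32, 4096) / 2**30)  # 1.0 GiB at batch size 1
print(kv_cache_bytes(8, 2048, 32, 4096) / 2**30)  # 8.0 GiB at batch size 8
```

The cache grows linearly with batch size, which is why GPUs with more VRAM can run higher batch sizes and larger models before running out of memory.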
Text-to-Image, Faster Than Ever

Measuring image generation speed is another way to evaluate performance. One of the most straightforward ways uses Stable Diffusion, a popular image-based AI model that allows users to easily convert text descriptions into complex visual representations.
With Stable Diffusion, users can quickly create and refine images from text prompts to achieve their desired output. When using an RTX GPU, these results can be generated faster than processing the AI model on a CPU or NPU.
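Image-generation speed is usually reported as seconds per image (or its inverse, images per second). A minimal timing harness like the sketch below works with any generation callable; the `fake_generate` stand-in is a placeholder you would swap for a real Stable Diffusion pipeline call:

```python
import time

def seconds_per_image(generate, prompt: str, n_images: int = 4) -> float:
    """Average wall-clock seconds per image for any callable that
    turns a text prompt into one image (e.g. a diffusion pipeline)."""
    start = time.perf_counter()
    for _ in range(n_images):
        generate(prompt)
    return (time.perf_counter() - start) / n_images

# Stand-in "model" for demonstration; replace with a real pipeline.
def fake_generate(prompt: str) -> None:
    time.sleep(0.01)  # pretend each image takes 10 ms

avg = seconds_per_image(fake_generate, "a castle at sunset")
print(f"{avg:.3f} s/image")
```

Averaging over several images smooths out one-time costs such as model warm-up, which would otherwise skew a single-image measurement.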
That performance is even higher when using the TensorRT extension for the popular Automatic1111 interface. RTX users can generate images from prompts up to 2x faster with the SDXL Base checkpoint - significantly streamlining Stable Diffusion workflows.
ComfyUI, another popular Stable Diffusion user interface, added TensorRT acceleration last week. RTX users can now generate images from prompts up to 60% faster, and can even convert these images to videos using Stable Video Diffusion up to 70% faster with TensorRT.
TensorRT acceleration can be put to the test in the new UL Procyon AI Image Generation benchmark, which delivers speedups of 50% on a GeForce RTX 4080 SUPER GPU compared with the fastest non-TensorRT implementation.
TensorRT acceleration will soon be released for Stable Diffusion 3 - Stability AI's new, highly anticipated text-to-image model - boosting performance by 50%. Plus, the new TensorRT-Model Optimizer enables accelerating performance even further. This results in a 70% speedup compared with the non-TensorRT implementation, along with a 50% reduction in memory consumption.
Of course, seeing is believing - the true test is the real-world use case of iterating on an original prompt. Users can refine image generation by tweaking prompts significantly faster on RTX GPUs, taking seconds per iteration compared with minutes on a MacBook Pro with an M3 Max. Plus, users get both speed and security, with everything remaining private when running locally on an RTX-powered PC or workstation.
The Results Are in and Open Sourced

But don't just take our word for it. The team of AI researchers and engineers behind the open-source Jan.ai recently integrated TensorRT-LLM into its local chatbot app, then tested these optimizations for themselves.
The researchers tested their implementation of TensorRT-LLM against the open-source llama.cpp inference engine across a variety of GPUs and CPUs used by the community. They found that TensorRT-LLM is 30-70% faster than llama.cpp. (Source: Jan.ai)