
Editor's note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX PC users.
The era of the AI PC is here, and it's powered by NVIDIA RTX and GeForce RTX technologies. With it comes a new way to evaluate performance for AI-accelerated tasks, and a new language that can be daunting to decipher when choosing between the desktops and laptops available.
While PC gamers understand frames per second (FPS) and similar stats, measuring AI performance requires new metrics.
Coming Out on TOPS

The first baseline is TOPS, or trillions of operations per second. Trillions is the important word here - the processing numbers behind generative AI tasks are absolutely massive. Think of TOPS as a raw performance metric, similar to an engine's horsepower rating. More is better.
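To make that scale concrete, here is a rough illustrative sketch in Python that times a large NumPy matrix multiply and converts it into operations per second. The matrix size is an arbitrary assumption, and real TOPS ratings come from vendor specifications for dedicated math units, not a script like this.

```python
# A rough, hedged illustration of "operations per second": time one large
# matrix multiply and count its ~2*N^3 multiply-add operations.
import time
import numpy as np

N = 4096  # arbitrary size chosen for illustration
a = np.random.rand(N, N).astype(np.float32)
b = np.random.rand(N, N).astype(np.float32)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

ops = 2 * N**3  # multiplies plus adds in an N x N x N matmul
print(f"~{ops / elapsed / 1e12:.3f} trillion operations per second (CPU, FP32)")
```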
Compare, for example, Microsoft's recently announced Copilot+ PC lineup, which includes neural processing units (NPUs) capable of upwards of 40 TOPS. That's sufficient for some light AI-assisted tasks, like asking a local chatbot where yesterday's notes are.
But many generative AI tasks are more demanding. NVIDIA RTX and GeForce RTX GPUs deliver unprecedented performance across all generative tasks - the GeForce RTX 4090 GPU offers more than 1,300 TOPS. This is the kind of horsepower needed to handle AI-assisted digital content creation, AI super resolution in PC gaming, generating images from text or video, querying local large language models (LLMs) and more.
Insert Tokens to Play

TOPS is only the beginning of the story. LLM performance is measured in the number of tokens generated by the model.
Tokens are the output of the LLM. A token can be a word in a sentence, or even a smaller fragment like punctuation or whitespace. Performance for AI-accelerated tasks can be measured in tokens per second.
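As a concrete illustration, the sketch below shows how a prompt breaks into tokens and how tokens per second can be measured around a timed generation call. It assumes the Hugging Face transformers library and the small gpt2 model purely as stand-ins; any local LLM and tokenizer would work the same way.

```python
# A minimal sketch of token counting and tokens-per-second measurement,
# assuming the "transformers" library and the small "gpt2" model.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.to("cuda" if torch.cuda.is_available() else "cpu")

prompt = "Where are yesterday's notes?"
print(tokenizer.tokenize(prompt))  # the prompt split into tokens

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
start = time.perf_counter()
outputs = model.generate(**inputs, max_new_tokens=128,
                         pad_token_id=tokenizer.eos_token_id)
elapsed = time.perf_counter() - start

new_tokens = outputs.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens / elapsed:.1f} tokens per second")
```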
Another important factor is batch size, or the number of inputs processed simultaneously in a single inference pass. Because an LLM sits at the core of many modern AI systems, the ability to handle multiple inputs (e.g., from a single application or across multiple applications) is a key differentiator. While larger batch sizes improve performance for concurrent inputs, they also require more memory, especially when combined with larger models.
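The sketch below, again assuming the transformers library and gpt2 as placeholders, illustrates the batching idea by timing generation at a few batch sizes and reporting total tokens per second; the counts are approximate, since some sequences can finish early.

```python
# A minimal sketch of how batch size affects total throughput, using
# "transformers" and "gpt2" as illustrative stand-ins.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2", padding_side="left")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.to("cuda" if torch.cuda.is_available() else "cpu")

prompts = ["Summarize my notes.", "Draft an email.", "Explain TOPS.", "List my tasks."]

for batch_size in (1, 2, 4):
    batch = prompts[:batch_size]
    inputs = tokenizer(batch, return_tensors="pt", padding=True).to(model.device)
    start = time.perf_counter()
    outputs = model.generate(**inputs, max_new_tokens=64,
                             pad_token_id=tokenizer.eos_token_id)
    elapsed = time.perf_counter() - start
    # approximate: assumes every sequence generated the full set of new tokens
    generated = (outputs.shape[1] - inputs["input_ids"].shape[1]) * batch_size
    print(f"batch={batch_size}: ~{generated / elapsed:.1f} tokens per second total")
```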
The more you batch, the more (time) you save. RTX GPUs are exceptionally well-suited for LLMs due to their large amounts of dedicated video random access memory (VRAM), Tensor Cores and TensorRT-LLM software.
GeForce RTX GPUs offer up to 24GB of high-speed VRAM, and NVIDIA RTX GPUs up to 48GB, which can handle larger models and enable higher batch sizes. RTX GPUs also take advantage of Tensor Cores - dedicated AI accelerators that dramatically speed up the computationally intensive operations required for deep learning and generative AI models. That maximum performance is easily accessed when an application uses the NVIDIA TensorRT software development kit (SDK), which unlocks the highest-performance generative AI on the more than 100 million Windows PCs and workstations powered by RTX GPUs.
The combination of memory, dedicated AI accelerators and optimized software gives RTX GPUs massive throughput gains, especially as batch sizes increase.
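A back-of-the-envelope sketch helps show why VRAM matters: the weights have to fit, and the key-value cache grows with both batch size and context length. The 7-billion-parameter, 32-layer, 4,096-dimension figures below are illustrative assumptions, not the specs of any particular model or product.

```python
# A hedged, back-of-the-envelope estimate of LLM memory use: FP16 weights
# plus a KV cache that scales with layers, context length and batch size.
def model_memory_gb(params_billion, bytes_per_param=2):  # FP16 weights
    return params_billion * 1e9 * bytes_per_param / 1e9

def kv_cache_gb(layers, hidden_dim, seq_len, batch_size, bytes_per_value=2):
    # 2x for keys and values, one entry per layer, token and hidden unit
    return 2 * layers * hidden_dim * seq_len * batch_size * bytes_per_value / 1e9

weights = model_memory_gb(7)  # roughly 14 GB at FP16 for a 7B-parameter model
for batch in (1, 4, 8):
    cache = kv_cache_gb(layers=32, hidden_dim=4096, seq_len=4096, batch_size=batch)
    print(f"batch={batch}: ~{weights + cache:.1f} GB "
          f"(weights {weights:.0f} GB + KV cache {cache:.1f} GB)")
```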
Text-to-Image, Faster Than Ever

Measuring image generation speed is another way to evaluate performance. One of the most straightforward ways uses Stable Diffusion, a popular image-based AI model that allows users to easily convert text descriptions into complex visual representations.
With Stable Diffusion, users can quickly create and refine images from text prompts to achieve their desired output. When using an RTX GPU, these results can be generated faster than processing the AI model on a CPU or NPU.
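As a rough illustration of timing text-to-image generation, the sketch below uses the Hugging Face diffusers library with a standard Stable Diffusion pipeline; the model ID and step count are placeholder assumptions, and TensorRT-specific pipelines are not shown here.

```python
# A minimal sketch of timing a text-to-image generation with Stable Diffusion
# via the "diffusers" library; model ID and step count are illustrative.
import time
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "a watercolor painting of a mountain lake at sunrise"
start = time.perf_counter()
image = pipe(prompt, num_inference_steps=30).images[0]
elapsed = time.perf_counter() - start

image.save("result.png")
print(f"generated in {elapsed:.1f} s")
```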
That performance is even higher when using the TensorRT extension for the popular Automatic1111 interface. RTX users can generate images from prompts up to 2x faster with the SDXL Base checkpoint - significantly streamlining Stable Diffusion workflows.
ComfyUI, another popular Stable Diffusion user interface, added TensorRT acceleration last week. RTX users can now generate images from prompts up to 60% faster, and can even convert these images to videos using Stable Video Diffusion up to 70% faster with TensorRT.
TensorRT acceleration can be put to the test in the new UL Procyon AI Image Generation benchmark, which delivers speedups of 50% on a GeForce RTX 4080 SUPER GPU compared with the fastest non-TensorRT implementation.
TensorRT acceleration will soon be released for Stable Diffusion 3 - Stability AI's new, highly anticipated text-to-image model - boosting performance by 50%. Plus, the new TensorRT-Model Optimizer enables accelerating performance even further. This results in a 70% speedup compared with the non-TensorRT implementation, along with a 50% reduction in memory consumption.
Of course, seeing is believing - the true test is the real-world use case of iterating on an original prompt. Users can refine image generation by tweaking prompts significantly faster on RTX GPUs, taking seconds per iteration compared with minutes on a MacBook Pro M3 Max. Plus, users get both speed and security, with everything remaining private when running locally on an RTX-powered PC or workstation.
The Results Are in and Open Sourced

But don't just take our word for it. The team of AI researchers and engineers behind the open-source Jan.ai recently integrated TensorRT-LLM into its local chatbot app, then tested these optimizations for themselves.
Source: Jan.ai

The researchers tested their implementation of TensorRT-LLM against the open-source llama.cpp inference engine across a variety of GPUs and CPUs used by the community. They found that TensorRT-LLM is 30-70% faster than llama.cpp.
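For reference, a tokens-per-second measurement of the kind described above can be run against llama.cpp through its llama-cpp-python bindings, as in the sketch below; the GGUF model path and settings are placeholders, and a TensorRT-LLM run would use its own separate API.

```python
# A minimal sketch of measuring tokens per second with llama.cpp via the
# llama-cpp-python bindings; the GGUF path and settings are placeholders.
import time
from llama_cpp import Llama

llm = Llama(model_path="model.gguf", n_gpu_layers=-1, verbose=False)

start = time.perf_counter()
out = llm("Explain what a token is in one paragraph.", max_tokens=256)
elapsed = time.perf_counter() - start

generated = out["usage"]["completion_tokens"]
print(f"{generated / elapsed:.1f} tokens per second")
```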