
Businesses across every industry are rolling out AI services this year. For Microsoft, Oracle, Perplexity, Snap and hundreds of other leading companies, using the NVIDIA AI inference platform - a full stack comprising world-class silicon, systems and software - is the key to delivering high-throughput and low-latency inference and enabling great user experiences while lowering cost.
NVIDIA's advancements in inference software optimization and the NVIDIA Hopper platform are helping industries serve the latest generative AI models, delivering excellent user experiences while optimizing total cost of ownership. The Hopper platform also helps deliver up to 15x more energy efficiency for inference workloads compared to previous generations.
AI inference is notoriously difficult, as it requires many steps to strike the right balance between throughput and user experience.
But the underlying goal is simple: generate more tokens at a lower cost. Tokens are the units of text - roughly words or word fragments - that a large language model (LLM) processes and produces. With AI inference services typically charging per million tokens generated, this goal offers the most visible return on AI investments and energy used per task.
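Per-million-token pricing makes the economics easy to work out. The sketch below uses an illustrative price, not an actual NVIDIA or vendor rate:

```python
def generation_cost(tokens_generated: int, price_per_million: float) -> float:
    """Cost of generating a given number of tokens at a per-million-token price."""
    return tokens_generated / 1_000_000 * price_per_million

# A 500-token answer at an illustrative $2.00 per million output tokens:
cost = generation_cost(500, 2.00)
print(f"${cost:.4f}")  # → $0.0010
```

Generating more tokens per second per GPU drives this per-request cost down, which is why throughput optimization translates directly into return on investment.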
Full-stack software optimization offers the key to improving AI inference performance and achieving this goal.
Cost-Effective User Throughput

Businesses are often challenged with balancing the performance and costs of inference workloads. While some customers or use cases may work with an out-of-the-box or hosted model, others may require customization. NVIDIA technologies simplify model deployment while optimizing cost and performance for AI inference workloads, and customers retain flexibility in choosing which models to deploy.
NVIDIA NIM microservices, NVIDIA Triton Inference Server and the NVIDIA TensorRT library are among the inference solutions NVIDIA offers to suit users' needs:
NVIDIA NIM inference microservices are prepackaged and performance-optimized for rapidly deploying AI foundation models on any infrastructure - cloud, data centers, edge or workstations.
NVIDIA Triton Inference Server, one of the company's most popular open-source projects, allows users to package and serve any model regardless of the AI framework it was trained on.
NVIDIA TensorRT is a high-performance deep learning inference library that includes runtime and model optimizations to deliver low-latency and high-throughput inference for production applications.
Available in all major cloud marketplaces, the NVIDIA AI Enterprise software platform includes all these solutions and provides enterprise-grade support, stability, manageability and security.
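NIM microservices expose an OpenAI-compatible HTTP API, so existing client code needs few changes. The sketch below builds a chat-completion request for a locally hosted NIM container; the endpoint URL and model name are illustrative assumptions, not guaranteed defaults for any particular deployment:

```python
import json

# Illustrative values - the actual port and model ID depend on which NIM
# container is deployed; localhost:8000 and the model name are assumptions.
NIM_BASE_URL = "http://localhost:8000/v1"
MODEL = "meta/llama-3.1-8b-instruct"

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Summarize NVIDIA Triton in one sentence."}
    ],
    "max_tokens": 128,
}

# To send the request against a running NIM endpoint:
# import urllib.request
# req = urllib.request.Request(
#     f"{NIM_BASE_URL}/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])

print(json.dumps(payload, indent=2))
```

Because the interface matches the OpenAI chat-completions schema, applications can switch between a hosted model and a self-hosted NIM endpoint by changing only the base URL.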
With the framework-agnostic NVIDIA AI inference platform, companies gain productivity and save on development, infrastructure and setup costs. Using NVIDIA technologies can also increase business revenue by helping companies avoid downtime and fraudulent transactions, improve e-commerce shopping conversion rates and generate new, AI-powered revenue streams.
Cloud-Based LLM Inference

To ease LLM deployment, NVIDIA has collaborated closely with every major cloud service provider to ensure that the NVIDIA inference platform can be seamlessly deployed in the cloud with minimal or no code required. NVIDIA NIM is integrated with cloud-native services such as:
Amazon SageMaker AI, Amazon Bedrock Marketplace, Amazon Elastic Kubernetes Service
Google Cloud's Vertex AI, Google Kubernetes Engine
Microsoft Azure AI Foundry (coming soon), Azure Kubernetes Service
Oracle Cloud Infrastructure's data science tools, Oracle Cloud Infrastructure Kubernetes Engine
Plus, for customized inference deployments, NVIDIA Triton Inference Server is deeply integrated into all major cloud service providers.
For example, using the OCI Data Science platform, deploying NVIDIA Triton is as simple as turning on a switch in the command line arguments during model deployment, which instantly launches an NVIDIA Triton inference endpoint.
Similarly, with Azure Machine Learning, users can deploy NVIDIA Triton either through no-code deployment in Azure Machine Learning Studio or full-code deployment with the Azure Machine Learning CLI. AWS provides one-click deployment for NVIDIA NIM from SageMaker Marketplace and also offers NVIDIA Triton in its AWS Deep Learning Containers, while Google Cloud provides a one-click deployment option on Google Kubernetes Engine (GKE).
The NVIDIA AI inference platform also supports standard communication protocols for delivering AI predictions, automatically scaling to accommodate the growing and changing needs of users within a cloud-based infrastructure.
From accelerating LLMs to enhancing creative workflows and transforming agreement management, NVIDIA's AI inference platform is driving real-world impact across industries. Learn how collaboration and innovation are enabling the organizations below to achieve new levels of efficiency and scalability.
Serving 400 Million Search Queries Monthly With Perplexity AI

Perplexity AI, an AI-powered search engine, handles over 435 million monthly queries. Each query represents multiple AI inference requests. To meet this demand, the Perplexity AI team turned to NVIDIA H100 GPUs, Triton Inference Server and TensorRT-LLM.
Supporting over 20 AI models, including Llama 3 variations like 8B and 70B, Perplexity processes diverse tasks such as search, summarization and question-answering. By using smaller classifier models to route tasks to GPU pods, managed by NVIDIA Triton, the company delivers cost-efficient, responsive service under strict service level agreements.
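The routing pattern described above - a small classifier deciding which model pool serves each request - can be sketched in a few lines. Everything here (task labels, pool names, the keyword-based classifier) is hypothetical; the real system routes with trained classifier models behind NVIDIA Triton:

```python
# Hypothetical router: a cheap classifier assigns each query a task label,
# and the label maps to a GPU pod pool serving an appropriately sized model.
POOLS = {
    "search": "llama-8b-pool",      # lightweight model for simple lookups
    "summarize": "llama-8b-pool",
    "qa": "llama-70b-pool",         # larger model for complex question answering
}

def classify(query: str) -> str:
    """Stand-in for a small trained classifier model."""
    q = query.lower()
    if q.startswith(("who", "what", "why", "how")):
        return "qa"
    if "summarize" in q or "tl;dr" in q:
        return "summarize"
    return "search"

def route(query: str) -> str:
    """Return the GPU pool that should serve this query."""
    return POOLS[classify(query)]

print(route("Why is the sky blue?"))        # → llama-70b-pool
print(route("summarize this article ..."))  # → llama-8b-pool
```

The point of the pattern is that only the queries that need a large model pay for one; routine traffic lands on smaller, cheaper models, which is how strict latency and cost targets can both be met.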
Through model parallelism, which splits LLMs across multiple GPUs, Perplexity achieved a threefold cost reduction while maintaining low latency and high accuracy.
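Model parallelism in this sense assigns contiguous slices of a model's layers to different GPUs. A minimal, framework-free sketch of just the partitioning step follows; real deployments would configure parallelism through TensorRT-LLM rather than hand-rolling it:

```python
def partition_layers(num_layers: int, num_gpus: int) -> list[range]:
    """Split layer indices into contiguous, near-equal shards, one per GPU."""
    base, extra = divmod(num_layers, num_gpus)
    shards, start = [], 0
    for gpu in range(num_gpus):
        size = base + (1 if gpu < extra else 0)  # spread any remainder evenly
        shards.append(range(start, start + size))
        start += size
    return shards

# An 80-layer model split across 4 GPUs:
for gpu, shard in enumerate(partition_layers(80, 4)):
    print(f"GPU {gpu}: layers {shard.start}-{shard.stop - 1}")
```

Splitting one large model across several GPUs lets each device hold a fraction of the weights, so bigger models can be served on existing hardware - trading some inter-GPU communication for substantially lower per-query cost.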