
Businesses across every industry are rolling out AI services this year. For Microsoft, Oracle, Perplexity, Snap and hundreds of other leading companies, using the NVIDIA AI inference platform - a full stack comprising world-class silicon, systems and software - is the key to delivering high-throughput and low-latency inference and enabling great user experiences while lowering cost.
NVIDIA's advancements in inference software optimization and the NVIDIA Hopper platform are helping industries serve the latest generative AI models, delivering excellent user experiences while optimizing total cost of ownership. The Hopper platform also helps deliver up to 15x more energy efficiency for inference workloads compared to previous generations.
AI inference is notoriously difficult, as it requires many steps to strike the right balance between throughput and user experience.
But the underlying goal is simple: generate more tokens at a lower cost. Tokens represent words in a large language model (LLM) system - and with AI inference services typically charging for every million tokens generated, this goal offers the most visible return on AI investments and energy used per task.
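The per-million-token pricing model above can be made concrete with a small arithmetic sketch. The token count and price below are hypothetical, chosen only to illustrate the calculation, not any provider's actual rates:

```python
# Illustrative token-economics sketch (prices and token counts are
# hypothetical, not NVIDIA's or any provider's actual rates).
def cost_per_task(tokens_per_task: int, price_per_million_tokens: float) -> float:
    """Cost of one inference task, given a per-million-token price."""
    return tokens_per_task / 1_000_000 * price_per_million_tokens

# A task that generates 500 tokens at a hypothetical $2 per million tokens:
print(f"${cost_per_task(500, 2.00):.4f}")  # → $0.0010
```

Generating more tokens per second on the same hardware, or the same tokens at lower cost, directly improves this number, which is why throughput optimization dominates inference economics.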
Full-stack software optimization offers the key to improving AI inference performance and achieving this goal.
Cost-Effective User Throughput

Businesses are often challenged to balance the performance and costs of inference workloads. While some customers or use cases may work with an out-of-the-box or hosted model, others may require customization. NVIDIA technologies simplify model deployment while optimizing cost and performance for AI inference workloads. Customers also retain the flexibility to customize and choose the models they deploy.
NVIDIA NIM microservices, NVIDIA Triton Inference Server and the NVIDIA TensorRT library are among the inference solutions NVIDIA offers to suit users' needs:
NVIDIA NIM inference microservices are prepackaged and performance-optimized for rapidly deploying AI foundation models on any infrastructure - cloud, data centers, edge or workstations.
NVIDIA Triton Inference Server, one of the company's most popular open-source projects, allows users to package and serve any model regardless of the AI framework it was trained on.
NVIDIA TensorRT is a high-performance deep learning inference library that includes runtime and model optimizations to deliver low-latency and high-throughput inference for production applications.
Available in all major cloud marketplaces, the NVIDIA AI Enterprise software platform includes all these solutions and provides enterprise-grade support, stability, manageability and security.
With the framework-agnostic NVIDIA AI inference platform, companies boost productivity and save on development, infrastructure and setup costs. Using NVIDIA technologies can also boost business revenue by helping companies avoid downtime and fraudulent transactions, increase e-commerce shopping conversion rates and generate new, AI-powered revenue streams.
Cloud-Based LLM Inference

To ease LLM deployment, NVIDIA has collaborated closely with every major cloud service provider to ensure that the NVIDIA inference platform can be seamlessly deployed in the cloud with minimal or no code required. NVIDIA NIM is integrated with cloud-native services such as:
Amazon SageMaker AI, Amazon Bedrock Marketplace, Amazon Elastic Kubernetes Service
Google Cloud's Vertex AI, Google Kubernetes Engine
Microsoft Azure AI Foundry (coming soon), Azure Kubernetes Service
Oracle Cloud Infrastructure's data science tools, Oracle Cloud Infrastructure Kubernetes Engine
Plus, for customized inference deployments, NVIDIA Triton Inference Server is deeply integrated into all major cloud service providers.
For example, using the OCI Data Science platform, deploying NVIDIA Triton is as simple as turning on a switch in the command line arguments during model deployment, which instantly launches an NVIDIA Triton inference endpoint.
Similarly, with Azure Machine Learning, users can deploy NVIDIA Triton either with no-code deployment through the Azure Machine Learning Studio or full-code deployment with the Azure Machine Learning CLI. AWS provides one-click deployment for NVIDIA NIM from SageMaker Marketplace and offers NVIDIA Triton in its AWS Deep Learning Containers, while Google Cloud provides a one-click deployment option on Google Kubernetes Engine (GKE).
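For the full-code path, Azure Machine Learning's CLI (v2) deploys a Triton-format model through a deployment YAML. The fragment below is a hedged sketch: the endpoint name, model name, path and instance type are placeholder values, and the exact schema should be verified against current Azure documentation.

```yaml
# Hypothetical Azure ML managed online deployment for a Triton-format model.
# All names, paths and the instance type are placeholders.
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: triton-deployment
endpoint_name: my-endpoint
model:
  name: my-triton-model
  version: 1
  path: ./models          # local Triton model repository
  type: triton_model      # tells Azure ML to use no-code Triton serving
instance_type: Standard_NC6s_v3
instance_count: 1
```

Such a file would typically be applied with `az ml online-deployment create -f deployment.yml`.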
The NVIDIA AI inference platform also uses popular communication methods for delivering AI predictions, automatically adjusting to accommodate the growing and changing needs of users within a cloud-based infrastructure.
From accelerating LLMs to enhancing creative workflows and transforming agreement management, NVIDIA's AI inference platform is driving real-world impact across industries. Learn how collaboration and innovation are enabling the organizations below to achieve new levels of efficiency and scalability.
Serving 400 Million Search Queries Monthly With Perplexity AI

Perplexity AI, an AI-powered search engine, handles over 435 million monthly queries. Each query represents multiple AI inference requests. To meet this demand, the Perplexity AI team turned to NVIDIA H100 GPUs, Triton Inference Server and TensorRT-LLM.
Supporting over 20 AI models, including Llama 3 variations like 8B and 70B, Perplexity processes diverse tasks such as search, summarization and question-answering. By using smaller classifier models to route tasks to GPU pods, managed by NVIDIA Triton, the company delivers cost-efficient, responsive service under strict service level agreements.
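The classifier-based routing pattern described above can be sketched in a few lines. Everything here is illustrative: the classification rules, pool names and round-robin policy are hypothetical stand-ins for the small classifier models and GPU pods the article mentions, not Perplexity's actual implementation.

```python
# Hypothetical sketch of classifier-based routing: a cheap classifier
# labels each query with a task type, and each task type maps to its
# own pool of model replicas (standing in for GPU pods).
from itertools import cycle

def classify(query: str) -> str:
    """Stand-in for a small classifier model that labels the task."""
    q = query.lower()
    if q.rstrip().endswith("?"):
        return "question-answering"
    if "summarize" in q:
        return "summarization"
    return "search"

# Each task type is served by its own pool of replicas (names invented).
pools = {
    "question-answering": cycle(["llama3-70b-pod-0", "llama3-70b-pod-1"]),
    "summarization": cycle(["llama3-8b-pod-0"]),
    "search": cycle(["llama3-8b-pod-1", "llama3-8b-pod-2"]),
}

def route(query: str) -> str:
    """Pick the next replica in the matching pool (round-robin)."""
    return next(pools[classify(query)])

print(route("Summarize this article"))
print(route("What is AI inference?"))
```

Routing cheap tasks to smaller models while reserving large models for harder queries is what keeps cost per query down under strict latency targets.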
Through model parallelism, which splits LLMs across GPUs, Perplexity achieved a threefold cost reduction while maintaining low latency and high accuracy.
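The idea behind model (tensor) parallelism can be demonstrated with a toy NumPy example: a layer's weight matrix is split column-wise across simulated devices, each computes its shard, and concatenating the shards reproduces the unsplit result. The sizes and two-way split are illustrative only; production stacks such as TensorRT-LLM do this across physical GPUs with fast interconnects.

```python
# Toy illustration of tensor parallelism: one layer's weight matrix is
# split column-wise across "devices"; each computes a partial output,
# and gathering the shards equals the full matmul.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))   # activations: batch of 4, hidden size 8
W = rng.standard_normal((8, 6))   # full weight matrix of one layer

# Split W column-wise into two shards, one per simulated device.
W0, W1 = np.hsplit(W, 2)

# Each "device" multiplies against its shard; results are concatenated.
y_parallel = np.concatenate([x @ W0, x @ W1], axis=1)
y_full = x @ W

print(np.allclose(y_parallel, y_full))  # → True
```

Because each device holds only a fraction of the weights, larger models fit in aggregate GPU memory, and the per-device work shrinks, which is how splitting an LLM across GPUs can cut serving cost while preserving latency.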