More Than Fine: Multi-LoRA Support Now Available in NVIDIA RTX AI Toolkit

28/08/2024

Editor's note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX PC users.

Large language models are driving some of the most exciting developments in AI with their ability to quickly understand, summarize and generate text-based content.

These capabilities power a variety of use cases, including productivity tools, digital assistants, non-playable characters in video games and more. But they're not a one-size-fits-all solution, and developers often must fine-tune LLMs to fit the needs of their applications.

The NVIDIA RTX AI Toolkit makes it easy to fine-tune and deploy AI models on RTX AI PCs and workstations through a technique called low-rank adaptation, or LoRA. A new update, available today, enables support for using multiple LoRA adapters simultaneously within the NVIDIA TensorRT-LLM AI acceleration library, improving the performance of fine-tuned models by up to 6x.

Fine-Tuned for Performance

LLMs must be carefully customized to achieve higher performance and meet growing user demands.

These foundational models are trained on huge amounts of data but often lack the context needed for a developer's specific use case. For example, a generic LLM can generate video game dialogue, but it will likely miss the nuance and subtlety needed to write in the style of a woodland elf with a dark past and a barely concealed disdain for authority.

To achieve more tailored outputs, developers can fine-tune the model with information related to the app's use case.

Take the example of developing an app that generates in-game dialogue using an LLM. Fine-tuning starts from the weights of a pretrained model, which already capture general knowledge such as what a character might say in a game. To get the dialogue in the right style, a developer can then tune the model on a smaller dataset of examples, such as dialogue written in a spookier or more villainous tone.
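For developers who prefer code to prose, here is roughly what that fine-tuning step could look like using the open-source Hugging Face transformers and peft libraries. This is a minimal sketch only: the base model name, dataset file and hyperparameters are illustrative assumptions, not details from this post, and the RTX AI Toolkit provides its own end-to-end workflow.

```python
# Minimal LoRA fine-tuning sketch using Hugging Face transformers + peft.
# The base model, dataset path and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Meta-Llama-3-8B"          # assumed base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA: train small low-rank matrices instead of the full weights.
lora_cfg = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()   # typically well under 1% of the base weights

# A small dataset of in-style dialogue examples (hypothetical file).
data = load_dataset("json", data_files="villain_dialogue.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-villain", per_device_train_batch_size=4,
                           num_train_epochs=3, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-villain")   # saves only the small adapter weights
```

Because only the low-rank matrices are trained and saved, the resulting adapter checkpoint is typically a few tens of megabytes rather than the many gigabytes of the base model.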

In some cases, developers may want to run several of these fine-tuned customizations simultaneously. For example, they may want to generate marketing copy written in different voices for various content channels. At the same time, they may want to summarize a document and make stylistic suggestions - as well as draft a video game scene description and an imagery prompt for a text-to-image generator.

It's not practical to run multiple full models simultaneously, as they won't all fit in GPU memory at the same time. Even if they did, inference time would be limited by memory bandwidth - how fast data can be read from memory into the GPU.

Lo(RA) and Behold

A popular way to address these issues is to use a fine-tuning technique such as low-rank adaptation (LoRA). A simple way to think of a LoRA adapter is as a patch file containing the customizations from the fine-tuning process.
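Concretely, a LoRA adapter replaces a full update to a weight matrix with the product of two small low-rank matrices, so the "patch" is tiny compared with the layer it modifies. A rough PyTorch sketch of the idea, with dimensions chosen purely for illustration:

```python
# Rough sketch of the LoRA idea: the fine-tuned layer computes
#   y = x @ (W + (alpha / r) * A @ B)
# where A (d x r) and B (r x d) are the only trained weights.
# Dimensions below are illustrative, not taken from this post.
import torch

d, r, alpha = 4096, 64, 16
W = torch.randn(d, d)            # frozen base weight (part of the foundation model)
A = torch.randn(d, r) * 0.01     # trained low-rank factors: the "patch"
B = torch.zeros(r, d)            # B starts at zero, so training begins from the base model

x = torch.randn(1, d)
y = x @ W + (alpha / r) * (x @ A @ B)   # base output plus the LoRA correction

print(f"base weight params:   {W.numel():,}")              # ~16.8M
print(f"adapter params (A+B): {A.numel() + B.numel():,}")  # ~0.5M, about 3% of the layer
```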

Once trained, customized LoRA adapters can integrate seamlessly with the foundation model during inference, adding minimal overhead. Developers can attach the adapters to a single model to serve multiple use cases. This keeps the memory footprint low while still providing the additional details needed for each specific use case.
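As an illustration outside of TensorRT-LLM, the open-source Hugging Face peft library exposes the same pattern: several named adapters attached to one in-memory copy of the base model, switched between by name. The adapter paths and names below are hypothetical, carried over from the fine-tuning sketch above.

```python
# One base model in memory, multiple named LoRA adapters attached to it.
# Adapter paths and names are hypothetical examples.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "meta-llama/Meta-Llama-3-8B"   # assumed base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach the first adapter, then load additional ones under their own names.
model = PeftModel.from_pretrained(model, "lora-villain", adapter_name="villain")
model.load_adapter("lora-marketing", adapter_name="marketing")

# Switch adapters by name; the large base weights are never duplicated.
model.set_adapter("villain")
prompt = tokenizer("Greetings, traveler.", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))

model.set_adapter("marketing")   # same base model, different customization
```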

Architecture overview of supporting multiple clients and use-cases with a single foundation model using multi-LoRA capabilities In practice, this means that an app can keep just one copy of the base model in memory, alongside many customizations using multiple LoRA adapters.

This process is called multi-LoRA serving. When multiple calls are made to the model, the GPU can process all of the calls in parallel, maximizing the use of its Tensor Cores and minimizing memory and bandwidth demands so developers can efficiently use AI models in their workflows. Fine-tuned models using multi-LoRA adapters perform up to 6x faster.
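TensorRT-LLM handles this batching natively. As a rough, library-agnostic illustration of the idea, recent versions of Hugging Face peft can also apply a different LoRA adapter to each request within a single batched generate call. The sketch below continues the hypothetical adapters from above and only approximates what multi-LoRA serving does under the hood; it is not the TensorRT-LLM path that delivers the speedup described here.

```python
# Rough illustration of multi-LoRA batching: one forward pass, with a different
# adapter applied to each request in the batch. Continues the previous sketch:
# `model`, `tokenizer` and the adapter names "villain" and "marketing" are the
# hypothetical ones defined above.
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"   # pad on the left for decoder-only generation

prompts = [
    "Write a short line of dialogue about the king's new tax.",   # -> villain adapter
    "Write one-sentence ad copy for a fantasy RPG.",              # -> marketing adapter
]
batch = tokenizer(prompts, return_tensors="pt", padding=True)

# Each row of the batch is routed to its own LoRA adapter in a single call.
out = model.generate(**batch, max_new_tokens=60,
                     adapter_names=["villain", "marketing"])
for text in tokenizer.batch_decode(out, skip_special_tokens=True):
    print(text)
```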

LLM inference performance on a GeForce RTX 4090 desktop GPU for Llama 3B int4 with LoRA adapters applied at runtime. Input sequence length is 43 tokens and output sequence length is 100 tokens. LoRA adapter max rank is 64.

In the example of the in-game dialogue application described earlier, the app's scope could be expanded, using multi-LoRA serving, to generate both story elements and illustrations - driven by a single prompt.

The user could input a basic story idea, and the LLM would flesh out the concept, expanding on the idea to provide a detailed foundation. The application could then use the same model, enhanced with two distinct LoRA adapters, to refine the story and generate corresponding imagery. One LoRA adapter could generate a Stable Diffusion prompt to create visuals using a locally deployed Stable Diffusion XL model, while the other, fine-tuned for story writing, could craft a well-structured and engaging narrative.

In this case, the same model is used for both inference passes, ensuring that the memory required for the process doesn't significantly increase. The second pass, which involves both text and image generation, is performed using batched inference, making the process exceptionally fast and efficient on NVIDIA GPUs. This allows users to rapidly iterate through different versions of their stories, refining the narrative and the illustrations with ease.
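A minimal sketch of that two-pass flow, reusing the hypothetical multi-adapter setup from the earlier snippets. The adapter names, prompts and Stable Diffusion XL checkpoint are assumptions for illustration, not details from this post.

```python
# Sketch of the two-pass flow: expand the idea, then one batched call with two
# adapters, then render the image locally. Assumes adapters named "storywriter"
# and "sdxl_prompt" were loaded earlier with model.load_adapter(...).
from diffusers import StableDiffusionXLPipeline

idea = "A lighthouse keeper discovers the sea is slowly turning to glass."

# Pass 1: expand the basic idea into a detailed foundation (base model, no adapter).
with model.disable_adapter():
    seed = tokenizer(f"Expand this story idea into a detailed outline: {idea}",
                     return_tensors="pt")
    outline = tokenizer.decode(model.generate(**seed, max_new_tokens=300)[0],
                               skip_special_tokens=True)

# Pass 2: one batched call, two adapters - narrative refinement and an SDXL prompt.
# (In practice you would strip the echoed instruction text from each output.)
batch = tokenizer([f"Write an engaging scene based on this outline: {outline}",
                   f"Write a Stable Diffusion prompt illustrating: {outline}"],
                  return_tensors="pt", padding=True)
story, sd_prompt = tokenizer.batch_decode(
    model.generate(**batch, max_new_tokens=200,
                   adapter_names=["storywriter", "sdxl_prompt"]),
    skip_special_tokens=True)

# Render the illustration with a locally deployed Stable Diffusion XL model.
pipe = StableDiffusionXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
image = pipe(sd_prompt).images[0]
image.save("scene.png")
```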

This process is outlined in more detail in a recent technical blog.

LLMs are becoming one of the most important components of modern AI. As adoption and integration grow, demand for powerful, fast LLMs with application-specific customizations will only increase. The multi-LoRA support added today to the RTX AI Toolkit gives developers a powerful new way to accelerate these capabilities.
LINK: https://blogs.nvidia.com/blog/ai-decoded-multi-lora-rtx/...