
Editor's note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX PC users.
Large language models are driving some of the most exciting developments in AI with their ability to quickly understand, summarize and generate text-based content.
These capabilities power a variety of use cases, including productivity tools, digital assistants, non-playable characters in video games and more. But they're not a one-size-fits-all solution, and developers often must fine-tune LLMs to fit the needs of their applications.
The NVIDIA RTX AI Toolkit makes it easy to fine-tune and deploy AI models on RTX AI PCs and workstations through a technique called low-rank adaptation, or LoRA. A new update, available today, enables support for using multiple LoRA adapters simultaneously within the NVIDIA TensorRT-LLM AI acceleration library, improving the performance of fine-tuned models by up to 6x.
Fine-Tuned for Performance

LLMs must be carefully customized to achieve higher performance and meet growing user demands.
These foundational models are trained on huge amounts of data but often lack the context needed for a developer's specific use case. For example, a generic LLM can generate video game dialogue, but it will likely miss the nuance and subtlety needed to write in the style of a woodland elf with a dark past and a barely concealed disdain for authority.
To achieve more tailored outputs, developers can fine-tune the model with information related to the app's use case.
Take the example of developing an app to generate in-game dialogue using an LLM. Fine-tuning starts with the weights of a pretrained model, which already capture general knowledge such as what a character may say in a game. To get the dialogue in the right style, a developer can then tune the model on a smaller dataset of examples, such as dialogue written in a more spooky or villainous tone.
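As a rough sketch of what this step can look like in practice, the snippet below uses the Hugging Face PEFT library to attach trainable LoRA weights to a base model before tuning it on a small, style-specific dialogue dataset. The model name, adapter settings and output path are illustrative placeholders, not values taken from the RTX AI Toolkit.

```python
# Minimal sketch: attaching LoRA weights to a base model with Hugging Face PEFT.
# The model name, target_modules list and output path are illustrative
# placeholders, not settings from the RTX AI Toolkit.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

lora_config = LoraConfig(
    r=64,                                  # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling applied to the update
    target_modules=["q_proj", "v_proj"],   # which projection layers get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()         # only the small adapter weights train

# ...train on the small, style-specific dialogue dataset with a standard
# training loop, then save just the adapter:
model.save_pretrained("villainous-dialogue-adapter")
```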
In some cases, developers may want to run several of these fine-tuned models simultaneously. For example, they may want to generate marketing copy written in different voices for various content channels. At the same time, they may want to summarize a document and make stylistic suggestions, as well as draft a video game scene description and an imagery prompt for a text-to-image generator.
It's not practical to run multiple models simultaneously, as they won't all fit in GPU memory at the same time. Even if they did, their inference time would be impacted by memory bandwidth - how fast data can be read from memory into GPUs.
Lo(RA) and Behold

A popular way to address these issues is to use fine-tuning techniques such as low-rank adaptation, or LoRA. A simple way to think of the resulting adapter is as a patch file containing the customizations from the fine-tuning process.
Once trained, customized LoRA adapters can integrate seamlessly with the foundation model during inference, adding minimal overhead. Developers can attach the adapters to a single model to serve multiple use cases. This keeps the memory footprint low while still providing the additional details needed for each specific use case.
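The patch-file intuition maps directly onto the math: a LoRA adapter stores two small matrices whose scaled product is added on top of the frozen base weights at inference. The sketch below shows this for a single weight matrix in plain PyTorch, with illustrative shapes; it is not tied to any particular runtime.

```python
# Minimal sketch of how a LoRA adapter "patches" a frozen weight matrix.
# Shapes and the scaling convention follow the standard LoRA formulation;
# the sizes here are illustrative only.
import torch

d, k, r, alpha = 4096, 4096, 64, 16       # hidden sizes, adapter rank, scale

W = torch.randn(d, k)                     # frozen base weight (not trained)
A = torch.randn(r, k) * 0.01              # small trained adapter matrix
B = torch.zeros(d, r)                     # small trained adapter matrix

x = torch.randn(k)                        # an activation vector

# At inference, the adapter's contribution is added on top of the base projection:
y = W @ x + (alpha / r) * (B @ (A @ x))

# The adapter only stores A and B: r*(d + k) values instead of d*k, which is
# why many adapters fit comfortably next to a single copy of the base model.
print(f"base params: {d * k:,}, adapter params: {r * (d + k):,}")
```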
[Figure: Architecture overview of supporting multiple clients and use cases with a single foundation model using multi-LoRA capabilities]

In practice, this means that an app can keep just one copy of the base model in memory, alongside many customizations using multiple LoRA adapters.
This process is called multi-LoRA serving. When multiple calls are made to the model, the GPU can process all of the calls in parallel, maximizing the use of its Tensor Cores and minimizing memory and bandwidth demands, so developers can use AI models efficiently in their workflows. Fine-tuned models using multi-LoRA adapters perform up to 6x faster.
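Conceptually, multi-LoRA serving keeps a single copy of the base weights and applies each request's adapter on top of a shared, batched projection. The sketch below illustrates that idea for one linear layer in plain PyTorch; it is not the TensorRT-LLM implementation, which performs this fusion inside optimized kernels.

```python
# Conceptual sketch of multi-LoRA serving for one linear layer: one shared
# base weight, several small adapters, and a batch of requests that each
# reference a different adapter. Illustrative only.
import torch

d, k, r, alpha = 4096, 4096, 64, 16

W = torch.randn(d, k)                               # single shared base weight

adapters = {                                        # many adapters, each tiny
    "marketing_voice": (torch.randn(r, k) * 0.01, torch.randn(d, r) * 0.01),
    "summarizer":      (torch.randn(r, k) * 0.01, torch.randn(d, r) * 0.01),
    "scene_writer":    (torch.randn(r, k) * 0.01, torch.randn(d, r) * 0.01),
}

# A batch of requests, each tagged with the adapter it needs.
batch = [
    ("marketing_voice", torch.randn(k)),
    ("summarizer",      torch.randn(k)),
    ("scene_writer",    torch.randn(k)),
]

xs = torch.stack([x for _, x in batch])             # (batch, k)
base_out = xs @ W.T                                 # one pass over shared weights

outputs = []
for (name, _), x, base in zip(batch, xs, base_out):
    A, B = adapters[name]
    outputs.append(base + (alpha / r) * (B @ (A @ x)))   # per-request patch
```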
[Chart: LLM inference performance on GeForce RTX 4090 Desktop GPU for Llama 3B int4 with LoRA adapters applied at runtime. Input sequence length is 43 tokens and output sequence length is 100 tokens. LoRA adapter max rank is 64.]

In the example of the in-game dialogue application described earlier, the app's scope could be expanded, using multi-LoRA serving, to generate both story elements and illustrations, driven by a single prompt.
The user could input a basic story idea, and the LLM would flesh out the concept, expanding on the idea to provide a detailed foundation. The application could then use the same model, enhanced with two distinct LoRA adapters, to refine the story and generate corresponding imagery. One LoRA adapter generates a Stable Diffusion prompt to create visuals using a locally deployed Stable Diffusion XL model. Meanwhile, the other LoRA adapter, fine-tuned for story writing, could craft a well-structured and engaging narrative.
In this case, the same model is used for both inference passes, ensuring that the space required for the process doesn't significantly increase. The second pass, which involves both text and image generation, is performed using batched inference, making the process exceptionally fast and efficient on NVIDIA GPUs. This allows users to rapidly iterate through different versions of their stories, refining the narrative and the illustrations with ease.
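A rough orchestration of that single-prompt, two-pass flow might look like the sketch below. The generate() helper and the adapter names are hypothetical stand-ins for whatever local runtime and adapters an app actually uses; they are not part of the RTX AI Toolkit API.

```python
# Rough orchestration sketch for the two-pass, single-prompt workflow described
# above. generate() is a placeholder for a real LLM call (for example, through
# a local TensorRT-LLM engine); the adapter names are hypothetical.
from typing import Optional


def generate(prompt: str, adapter: Optional[str] = None) -> str:
    """Placeholder for a local LLM call with an optional LoRA adapter."""
    return f"[{adapter or 'base'}] output for: {prompt}"


def run_story_pipeline(idea: str) -> tuple[str, str]:
    # Pass 1: the base model fleshes out the user's idea into a detailed outline.
    outline = generate(f"Expand this story idea into a detailed outline: {idea}")

    # Pass 2: the same model, with two different adapters, handles both tasks.
    # In a real multi-LoRA runtime these two calls would run as a single batch.
    narrative = generate(outline, adapter="story_writer_lora")
    sd_prompt = generate(outline, adapter="sd_prompt_lora")

    # sd_prompt would then be sent to a locally deployed Stable Diffusion XL
    # model to render the matching illustration.
    return narrative, sd_prompt


if __name__ == "__main__":
    story, image_prompt = run_story_pipeline("a woodland elf with a dark past")
    print(story)
    print(image_prompt)
```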
This process is outlined in more detail in a recent technical blog.
LLMs are becoming one of the most important components of modern AI. As adoption and integration grow, demand for powerful, fast LLMs with application-specific customizations will only increase. The multi-LoRA support added today to the RTX AI Toolkit gives developers a powerful new way to accelerate these capabilities.