
Editor's note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.
From games and content creation apps to software development and productivity tools, AI is increasingly being integrated into applications to enhance user experiences and boost efficiency.
Those efficiency boosts extend to everyday tasks, like web browsing. Brave, a privacy-focused web browser, recently launched a smart AI assistant called Leo AI that, in addition to providing search results, helps users summarize articles and videos, surface insights from documents, answer questions and more.
The technology behind Brave and other AI-powered tools is a combination of hardware, libraries and ecosystem software that's optimized for the unique needs of AI.
Why Software Matters
NVIDIA GPUs power the world's AI, whether running in the data center or on a local PC. They contain Tensor Cores, which are specifically designed to accelerate AI applications like Leo AI through massively parallel number crunching - rapidly processing the huge number of calculations needed for AI simultaneously, rather than doing them one at a time.
But great hardware only matters if applications can make efficient use of it. The software running on top of GPUs is just as critical for delivering the fastest, most responsive AI experience.
The first layer is the AI inference library, which acts like a translator that takes requests for common AI tasks and converts them to specific instructions for the hardware to run. Popular inference libraries include NVIDIA TensorRT, Microsoft's DirectML and the one used by Brave and Leo AI via Ollama, called llama.cpp.
Llama.cpp is an open-source library and framework. Through CUDA - the NVIDIA software application programming interface that enables developers to optimize for GeForce RTX and NVIDIA RTX GPUs - it provides Tensor Core acceleration for hundreds of models, including popular large language models (LLMs) like Gemma, Llama 3, Mistral and Phi.
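To make that layering concrete, here is a minimal sketch of calling llama.cpp directly through its Python bindings, llama-cpp-python. The model path and prompt are placeholders, and the example assumes the bindings were built with CUDA support so that layers can be offloaded to the GPU.

```python
# Minimal sketch: using llama.cpp through the llama-cpp-python bindings.
# Assumes the package was built with CUDA support and that a GGUF model
# file (the path below is a placeholder) has already been downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers to the GPU when CUDA is available
    n_ctx=4096,       # context window size
)

output = llm(
    "Summarize the benefits of running an LLM locally in two sentences.",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```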
On top of the inference library, applications often use a local inference server to simplify integration. The inference server handles tasks like downloading and configuring specific AI models so that the application doesn't have to.
Ollama is an open-source project that sits on top of llama.cpp and provides access to the library's features. It supports an ecosystem of applications that deliver local AI capabilities. Across the entire technology stack, NVIDIA works to optimize tools like Ollama for NVIDIA hardware to deliver faster, more responsive AI experiences on RTX.
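To illustrate the inference-server layer, the sketch below sends a prompt to a locally running Ollama instance over its HTTP API - roughly the kind of request a client application can make behind the scenes. It assumes Ollama is running on its default port and that the Llama 3 model has already been pulled; the prompt is only an example.

```python
# Minimal sketch: an application talking to Ollama's local HTTP API.
# Assumes the Ollama service is running on the default port (11434)
# and that the "llama3" model has already been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "In one paragraph, explain what an inference server does.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```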
Applications like Brave's Leo AI can access RTX-powered AI acceleration to enhance user experiences. That acceleration is the product of optimization across the entire technology stack - from hardware to system software to the inference libraries and tools that applications rely on.
Local vs. Cloud
Brave's Leo AI can run in the cloud or locally on a PC through Ollama.
There are many benefits to processing inference using a local model. By not sending prompts to an outside server for processing, the experience is private and always available. For instance, Brave users can get help with their finances or medical questions without sending anything to the cloud. Running locally also eliminates the need to pay for unrestricted cloud access. With Ollama, users can take advantage of a wider variety of open-source models than most hosted services, which often support only one or two varieties of the same AI model.
Users can also interact with models that have different specializations, such as bilingual models, compact-sized models, code generation models and more.
RTX enables a fast, responsive experience when running AI locally. Using the Llama 3 8B model with llama.cpp, users can expect responses of up to 149 tokens per second - or approximately 110 words per second. When using Brave with Leo AI and Ollama, this means snappier responses to questions, requests for content summaries and more.
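For readers who want a rough sense of throughput on their own hardware, the sketch below reads the timing fields that Ollama reports with each completion. It assumes a local Ollama instance with a Llama 3 model already pulled, and the numbers will vary with GPU, model quantization and prompt length.

```python
# Rough sketch: estimating local generation speed from the timing fields
# Ollama returns with a completion. Results vary by GPU, quantization and prompt.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Write a short paragraph about web browsers.",
        "stream": False,
    },
    timeout=300,
).json()

tokens = resp["eval_count"]            # number of tokens generated
seconds = resp["eval_duration"] / 1e9  # duration is reported in nanoseconds
print(f"{tokens} tokens in {seconds:.2f}s -> {tokens / seconds:.1f} tokens/s")
# At roughly 0.75 words per token, ~149 tokens/s works out to about 110 words/s.
```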
NVIDIA internal throughput performance measurements on NVIDIA GeForce RTX GPUs, featuring a Llama 3 8B model with an input sequence length of 100 tokens, generating 100 tokens.
Get Started With Brave With Leo AI and Ollama
Installing Ollama is easy - download the installer from the project's website and let it run in the background. From a command prompt, users can download and install a wide variety of supported models, then interact with the local model from the command line.
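For those who prefer scripting that same pull-and-chat workflow, here is a small sketch using the ollama Python client (assuming it is installed and the Ollama service is running in the background); the model name and prompt are just examples.

```python
# Sketch of the pull-then-chat workflow scripted with the ollama Python
# client instead of the command line. Assumes the Ollama service is running.
import ollama

ollama.pull("llama3")  # comparable to `ollama pull llama3` on the command line

reply = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Give me three tips for safer web browsing."}],
)
print(reply["message"]["content"])
```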
For simple instructions on how to add local LLM support via Ollama, read the company's blog. Once configured to point to Ollama, Leo AI will use the locally hosted LLM for prompts and queries. Users can also switch between cloud and local models at any time.
Brave with Leo AI running on Ollama and accelerated by RTX is a great way to get more out of your browsing experience. You can even summarize and ask questions about AI Decoded blogs! Developers can learn more about how to use Ollama and llama.cpp in the NVIDIA Technical Blog.
Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what's new and what's next by subscribing to the AI Decoded newsletter.