
Editor's note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.
From games and content creation apps to software development and productivity tools, AI is increasingly being integrated into applications to enhance user experiences and boost efficiency.
Those efficiency boosts extend to everyday tasks, like web browsing. Brave, a privacy-focused web browser, recently launched a smart AI assistant called Leo AI that, in addition to providing search results, helps users summarize articles and videos, surface insights from documents, answer questions and more.
The technology behind Brave and other AI-powered tools is a combination of hardware, libraries and ecosystem software that's optimized for the unique needs of AI.
Why Software Matters

NVIDIA GPUs power the world's AI, whether running in the data center or on a local PC. They contain Tensor Cores, which are specifically designed to accelerate AI applications like Leo AI through massively parallel number crunching - rapidly processing the huge number of calculations needed for AI simultaneously, rather than doing them one at a time.
But great hardware only matters if applications can make efficient use of it. The software running on top of GPUs is just as critical for delivering the fastest, most responsive AI experience.
The first layer is the AI inference library, which acts like a translator that takes requests for common AI tasks and converts them to specific instructions for the hardware to run. Popular inference libraries include NVIDIA TensorRT, Microsoft's DirectML and the one used by Brave and Leo AI via Ollama, called llama.cpp.
Llama.cpp is an open-source library and framework. Through CUDA - the NVIDIA software application programming interface that enables developers to optimize for GeForce RTX and NVIDIA RTX GPUs - it provides Tensor Core acceleration for hundreds of models, including popular large language models (LLMs) like Gemma, Llama 3, Mistral and Phi.
On top of the inference library, applications often use a local inference server to simplify integration. The inference server handles tasks like downloading and configuring specific AI models so that the application doesn't have to.
Ollama is an open-source project that sits on top of llama.cpp and provides access to the library's features. It supports an ecosystem of applications that deliver local AI capabilities. Across the entire technology stack, NVIDIA works to optimize tools like Ollama for NVIDIA hardware to deliver faster, more responsive AI experiences on RTX.
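To make the inference-server idea concrete, here is a minimal sketch of how an application might talk to a locally running Ollama server over its REST API. It assumes Ollama is serving at its default address, http://localhost:11434, and that a model such as "llama3" has already been downloaded; the function names are illustrative, not part of any official client.

```python
"""Minimal sketch: querying a local Ollama inference server.

Assumes Ollama is installed and listening on its default port (11434).
"""
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    # Ollama's /api/generate endpoint accepts a JSON body with the model
    # name and prompt; stream=False asks for one complete JSON response
    # instead of a token-by-token stream.
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    # Send the request and return the model's text response.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server with the model pulled):
# print(generate("llama3", "Summarize this article in two sentences: ..."))
```

Because the server hides model download, configuration and hardware details behind one local HTTP endpoint, an application like a browser only needs a few lines of glue code to gain local AI features.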
As a result, applications like Brave's Leo AI can tap into this RTX-powered acceleration to enhance user experiences.
Local vs. Cloud

Brave's Leo AI can run in the cloud or locally on a PC through Ollama.
There are many benefits to running inference locally. Because prompts are never sent to an outside server for processing, the experience is both private and always available. For instance, Brave users can get help with their finances or medical questions without anything leaving their PC. Running locally also eliminates the need to pay for cloud access. With Ollama, users can take advantage of a wider variety of open-source models than most hosted services, which often support only one or two variants of the same AI model.
Users can also interact with models that have different specializations, such as bilingual models, compact-sized models, code generation models and more.
RTX enables a fast, responsive experience when running AI locally. Using the Llama 3 8B model with llama.cpp, users can expect responses up to 149 tokens per second - or approximately 110 words per second. When using Brave with Leo AI and Ollama, this means snappier responses to questions, requests for content summaries and more.
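The tokens-to-words conversion quoted above follows from a common rule of thumb: English text averages roughly 0.75 words per LLM token (the exact ratio varies by tokenizer and text, so the 0.75 figure here is an assumption, not a measured value).

```python
WORDS_PER_TOKEN = 0.75  # rough rule of thumb; varies by model tokenizer and text

def tokens_per_sec_to_words(tps: float) -> float:
    """Convert a token-generation rate to an approximate word rate."""
    return tps * WORDS_PER_TOKEN

# 149 tokens/s * 0.75 words/token = 111.75 words/s,
# i.e. roughly the "approximately 110 words per second" quoted in the post.
```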
NVIDIA internal throughput performance measurements on NVIDIA GeForce RTX GPUs, featuring a Llama 3 8B model with an input sequence length of 100 tokens, generating 100 tokens.

Get Started With Brave With Leo AI and Ollama

Installing Ollama is easy - download the installer from the project's website and let it run in the background. From a command prompt, users can download and install a wide variety of supported models, then interact with the local model from the command line.
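The command-line steps described above can be sketched as follows. The `ollama pull` and `ollama run` subcommands are the CLI's standard way to download a model and chat with it; the model name "llama3" is just an example, and any supported model tag could be substituted.

```python
import subprocess

def ollama_setup_commands(model: str = "llama3") -> list:
    """Return the Ollama CLI commands to download a model and start a chat."""
    return [
        ["ollama", "pull", model],  # download and configure the model locally
        ["ollama", "run", model],   # open an interactive chat in the terminal
    ]

# To actually execute the steps (requires Ollama to be installed):
# for cmd in ollama_setup_commands():
#     subprocess.run(cmd, check=True)
```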
For simple instructions on how to add local LLM support via Ollama, read the company's blog. Once configured to point to Ollama, Leo AI will use the locally hosted LLM for prompts and queries. Users can also switch between cloud and local models at any time.
Brave with Leo AI running on Ollama and accelerated by RTX is a great way to get more out of your browsing experience. You can even summarize and ask questions about AI Decoded blogs! Developers can learn more about how to use Ollama and llama.cpp in the NVIDIA Technical Blog.
Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what's new and what's next by subscribing to the AI Decoded newsletter.