
Editor's note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.
From games and content creation apps to software development and productivity tools, AI is increasingly being integrated into applications to enhance user experiences and boost efficiency.
Those efficiency boosts extend to everyday tasks, like web browsing. Brave, a privacy-focused web browser, recently launched a smart AI assistant called Leo AI that, in addition to providing search results, helps users summarize articles and videos, surface insights from documents, answer questions and more.
The technology behind Brave and other AI-powered tools is a combination of hardware, libraries and ecosystem software optimized for the unique needs of AI.
Why Software Matters

NVIDIA GPUs power the world's AI, whether running in the data center or on a local PC. They contain Tensor Cores, which are specifically designed to accelerate AI applications like Leo AI through massively parallel number crunching: rapidly processing the huge number of calculations AI requires simultaneously, rather than one at a time.
But great hardware only matters if applications can make efficient use of it. The software running on top of GPUs is just as critical for delivering the fastest, most responsive AI experience.
The first layer is the AI inference library, which acts like a translator that takes requests for common AI tasks and converts them to specific instructions for the hardware to run. Popular inference libraries include NVIDIA TensorRT, Microsoft's DirectML and the one used by Brave and Leo AI via Ollama, called llama.cpp.
Llama.cpp is an open-source library and framework. Through CUDA, the NVIDIA application programming interface that enables developers to optimize for GeForce RTX and NVIDIA RTX GPUs, it provides Tensor Core acceleration for hundreds of models, including popular large language models (LLMs) such as Gemma, Llama 3, Mistral and Phi.
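For developers, llama.cpp can be used from Python through the community llama-cpp-python bindings. The sketch below is a minimal, hedged example: the GGUF filename is a placeholder, and the import is guarded so the snippet degrades gracefully when the bindings aren't installed. Setting `n_gpu_layers=-1` asks llama.cpp to offload every model layer to the GPU, which is where CUDA and Tensor Core acceleration come in on RTX hardware.

```python
# Hedged sketch: loading a GGUF model with llama.cpp via the
# llama-cpp-python bindings, with full GPU offload on RTX GPUs.
try:
    from llama_cpp import Llama
except ImportError:
    Llama = None  # bindings not installed; pip install llama-cpp-python

def load_model(path: str):
    """Load a GGUF model with all layers offloaded to the GPU,
    or return None if the bindings are unavailable."""
    if Llama is None:
        return None
    # n_gpu_layers=-1 offloads every layer; CUDA builds use Tensor Cores.
    return Llama(model_path=path, n_gpu_layers=-1)

# Usage (the filename below is a placeholder, not a real bundled file):
# model = load_model("llama-3-8b-instruct.Q4_K_M.gguf")
# print(model("Q: What is an inference library?\nA:", max_tokens=64))
```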
On top of the inference library, applications often use a local inference server to simplify integration. The inference server handles tasks like downloading and configuring specific AI models so that the application doesn't have to.
Ollama is an open-source project that sits on top of llama.cpp and provides access to the library's features. It supports an ecosystem of applications that deliver local AI capabilities. Across the entire technology stack, NVIDIA works to optimize tools like Ollama for NVIDIA hardware to deliver faster, more responsive AI experiences on RTX.
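Ollama exposes its features to applications through a simple REST API served on localhost (port 11434 by default), which is how a browser like Brave can talk to a locally hosted model. Below is a minimal sketch of that call from Python, assuming a local Ollama server is running and a model has already been pulled; the model tag "llama3" is just one example.

```python
import json
import urllib.request

# Ollama's default local endpoint for text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Assemble a non-streaming generate request for Ollama's REST API."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the reply."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires `ollama serve` running and e.g. `ollama pull llama3` done):
# print(generate("llama3", "In one sentence, what is local AI inference?"))
```

Because the server handles model downloads and configuration, the application code stays this small regardless of which model the user chooses.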
Applications like Brave's Leo AI benefit from this work across the entire technology stack: NVIDIA's optimizations span hardware, system software and the inference libraries and tools that applications build on.
Local vs. Cloud

Brave's Leo AI can run in the cloud or locally on a PC through Ollama.
There are many benefits to running inference with a local model. Because prompts are never sent to an outside server for processing, the experience is both private and always available. For instance, Brave users can get help with finance or medical questions without sending anything to the cloud. Running locally also eliminates the need to pay for unrestricted cloud access. With Ollama, users can take advantage of a wider variety of open-source models than most hosted services, which often support only one or two variants of the same AI model.
Users can also interact with models that have different specializations, such as bilingual models, compact-sized models, code generation models and more.
RTX enables a fast, responsive experience when running AI locally. Using the Llama 3 8B model with llama.cpp, users can expect responses up to 149 tokens per second - or approximately 110 words per second. When using Brave with Leo AI and Ollama, this means snappier responses to questions, requests for content summaries and more.
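The post's conversion from 149 tokens per second to roughly 110 words per second implies a ratio of about 0.75 words per token, a common rule of thumb for English text rather than an exact property of any tokenizer. The arithmetic can be sketched as:

```python
# Rough conversion from token throughput to word throughput.
# ~0.75 words per token is a common rule of thumb for English text,
# not an exact property of any particular tokenizer.
WORDS_PER_TOKEN = 0.75

def tokens_to_words_per_sec(tokens_per_sec: float) -> float:
    """Approximate words/second from a tokens/second measurement."""
    return tokens_per_sec * WORDS_PER_TOKEN

throughput = tokens_to_words_per_sec(149)  # the post's Llama 3 8B figure
print(f"{throughput:.0f} words/sec")  # ~112, in line with the post's ~110
```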
NVIDIA internal throughput performance measurements on NVIDIA GeForce RTX GPUs, featuring a Llama 3 8B model with an input sequence length of 100 tokens, generating 100 tokens.

Get Started With Brave With Leo AI and Ollama

Installing Ollama is easy: download the installer from the project's website and let it run in the background. From a command prompt, users can download and install a wide variety of supported models, then interact with a local model from the command line.
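The command-line workflow described above can be sketched as follows. The model tag `llama3` is one example from the Ollama model library, and the snippet skips gracefully on machines where Ollama is not yet installed:

```shell
# Pull a model and chat with it from the terminal.
# 'llama3' is one example tag; any model from the Ollama library works.
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama3   # download and configure the model locally
  ollama run llama3 "Summarize local AI inference in one sentence."
else
  echo "ollama not found - install it from the project's website first"
fi
```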
For simple instructions on how to add local LLM support via Ollama, read the company's blog. Once configured to point to Ollama, Leo AI will use the locally hosted LLM for prompts and queries. Users can also switch between cloud and local models at any time.
Brave with Leo AI running on Ollama and accelerated by RTX is a great way to get more out of your browsing experience. You can even summarize and ask questions about AI Decoded blogs! Developers can learn more about how to use Ollama and llama.cpp in the NVIDIA Technical Blog.
Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what's new and what's next by subscribing to the AI Decoded newsletter.