
Editor's note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.
From games and content creation apps to software development and productivity tools, AI is increasingly being integrated into applications to enhance user experiences and boost efficiency.
Those efficiency boosts extend to everyday tasks, like web browsing. Brave, a privacy-focused web browser, recently launched a smart AI assistant called Leo AI that, in addition to providing search results, helps users summarize articles and videos, surface insights from documents, answer questions and more.
The technology behind Brave and other AI-powered tools is a combination of hardware, libraries and ecosystem software that's optimized for the unique needs of AI.
Why Software Matters

NVIDIA GPUs power the world's AI, whether running in the data center or on a local PC. They contain Tensor Cores, which are specifically designed to accelerate AI applications like Leo AI through massively parallel number crunching, rapidly processing the huge number of calculations needed for AI simultaneously rather than doing them one at a time.
But great hardware only matters if applications can make efficient use of it. The software running on top of GPUs is just as critical for delivering the fastest, most responsive AI experience.
The first layer is the AI inference library, which acts like a translator that takes requests for common AI tasks and converts them to specific instructions for the hardware to run. Popular inference libraries include NVIDIA TensorRT, Microsoft's DirectML and the one used by Brave and Leo AI via Ollama, called llama.cpp.
Llama.cpp is an open-source library and framework. Through CUDA, the NVIDIA software application programming interface that enables developers to optimize for GeForce RTX and NVIDIA RTX GPUs, it provides Tensor Core acceleration for hundreds of models, including popular large language models (LLMs) like Gemma, Llama 3, Mistral and Phi.
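To make the role of the inference library concrete, here is a minimal sketch of calling llama.cpp from Python through llama-cpp-python, one common binding around the library. The model file name and generation parameters are illustrative assumptions, not Brave's or Leo AI's actual configuration.

```python
# Minimal sketch: running a prompt through llama.cpp via the llama-cpp-python
# binding. The GGUF file path and parameters below are hypothetical examples.
from llama_cpp import Llama

# n_gpu_layers=-1 asks the binding to offload all model layers to the GPU,
# which is where CUDA and Tensor Core acceleration apply on RTX hardware.
llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local model file
    n_gpu_layers=-1,
    n_ctx=4096,
)

output = llm(
    "Summarize the benefits of running AI models locally in one sentence.",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```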
On top of the inference library, applications often use a local inference server to simplify integration. The inference server handles tasks like downloading and configuring specific AI models so that the application doesn't have to.
Ollama is an open-source project that sits on top of llama.cpp and provides access to the library's features. It supports an ecosystem of applications that deliver local AI capabilities. Across the entire technology stack, NVIDIA works to optimize tools like Ollama for NVIDIA hardware to deliver faster, more responsive AI experiences on RTX.
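Because Ollama exposes a local HTTP API (by default on port 11434), an application can hand prompts to it without linking against llama.cpp directly. The sketch below shows that pattern in its simplest form, assuming Ollama is running and a model named "llama3" has already been pulled; it is not how Brave itself integrates with Ollama.

```python
# Minimal sketch: an application sending a chat request to a locally running
# Ollama server over its HTTP API. Assumes the "llama3" model is available.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",  # Ollama's default local endpoint
    json={
        "model": "llama3",
        "messages": [
            {"role": "user", "content": "Summarize this page in three bullet points: ..."}
        ],
        "stream": False,  # return one complete message instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])  # the assistant's reply
```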
Applications like Brave's Leo AI can tap into this RTX-powered acceleration, benefiting from optimizations that span the entire stack, from hardware and system software to the inference libraries and tools the applications are built on.
Local vs. Cloud

Brave's Leo AI can run in the cloud or locally on a PC through Ollama.
There are many benefits to running inference with a local model. Because prompts aren't sent to an outside server for processing, the experience is both private and always available. For instance, Brave users can get help with their finances or medical questions without sending anything to the cloud. Running locally also eliminates the need to pay for unrestricted cloud access. With Ollama, users can take advantage of a wider variety of open-source models than most hosted services, which often support only one or two variants of the same AI model.
Users can also interact with models that have different specializations, such as bilingual models, compact models, code-generation models and more.
RTX enables a fast, responsive experience when running AI locally. Using the Llama 3 8B model with llama.cpp, users can expect responses up to 149 tokens per second - or approximately 110 words per second. When using Brave with Leo AI and Ollama, this means snappier responses to questions, requests for content summaries and more.
NVIDIA internal throughput performance measurements on NVIDIA GeForce RTX GPUs, featuring a Llama 3 8B model with an input sequence length of 100 tokens, generating 100 tokens.

Get Started With Brave With Leo AI and Ollama

Installing Ollama is easy: download the installer from the project's website and let it run in the background. From a command prompt, users can download and install a wide variety of supported models, then interact with the local model from the command line.
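Readers who want to sanity-check throughput on their own hardware can time a single generation through Ollama's local API and convert it to tokens per second. The sketch below assumes Ollama is running with a model such as "llama3" already pulled, and relies on the eval_count and eval_duration fields Ollama includes in its generate responses; actual numbers will vary by GPU, model and quantization.

```python
# Rough throughput check against a local Ollama server: request one completion
# and compute tokens per second from the counters Ollama returns.
# Assumes Ollama is running locally and "llama3" has already been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Write a short paragraph about why local inference matters.",
        "stream": False,
    },
    timeout=300,
).json()

generated_tokens = resp["eval_count"]             # tokens produced
generation_time_s = resp["eval_duration"] / 1e9   # reported in nanoseconds
print(f"{generated_tokens / generation_time_s:.1f} tokens/sec")
```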
For simple instructions on how to add local LLM support via Ollama, read the company's blog. Once configured to point to Ollama, Leo AI will use the locally hosted LLM for prompts and queries. Users can also switch between cloud and local models at any time.
Brave with Leo AI running on Ollama and accelerated by RTX is a great way to get more out of your browsing experience. You can even summarize and ask questions about AI Decoded blogs! Developers can learn more about how to use Ollama and llama.cpp in the NVIDIA Technical Blog.
Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what's new and what's next by subscribing to the AI Decoded newsletter.