
Editor's note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.
From games and content creation apps to software development and productivity tools, AI is increasingly being integrated into applications to enhance user experiences and boost efficiency.
Those efficiency boosts extend to everyday tasks, like web browsing. Brave, a privacy-focused web browser, recently launched a smart AI assistant called Leo AI that, in addition to providing search results, helps users summarize articles and videos, surface insights from documents, answer questions and more.
The technology behind Brave and other AI-powered tools is a combination of hardware, libraries and ecosystem software optimized for the unique needs of AI.
Why Software Matters

NVIDIA GPUs power the world's AI, whether running in the data center or on a local PC. They contain Tensor Cores, which are specifically designed to accelerate AI applications like Leo AI through massively parallel number crunching, rapidly processing the huge number of calculations AI requires simultaneously rather than one at a time.
But great hardware only matters if applications can make efficient use of it. The software running on top of GPUs is just as critical for delivering the fastest, most responsive AI experience.
The first layer is the AI inference library, which acts like a translator that takes requests for common AI tasks and converts them to specific instructions for the hardware to run. Popular inference libraries include NVIDIA TensorRT, Microsoft's DirectML and the one used by Brave and Leo AI via Ollama, called llama.cpp.
Llama.cpp is an open-source library and framework. Through CUDA, the NVIDIA application programming interface that enables developers to optimize for GeForce RTX and NVIDIA RTX GPUs, it provides Tensor Core acceleration for hundreds of models, including popular large language models (LLMs) like Gemma, Llama 3, Mistral and Phi.
On top of the inference library, applications often use a local inference server to simplify integration. The inference server handles tasks like downloading and configuring specific AI models so that the application doesn't have to.
Ollama is an open-source project that sits on top of llama.cpp and provides access to the library's features. It supports an ecosystem of applications that deliver local AI capabilities. Across the entire technology stack, NVIDIA works to optimize tools like Ollama for NVIDIA hardware to deliver faster, more responsive AI experiences on RTX.
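To make the inference-server idea concrete, here is a minimal sketch of how an application could talk to a local Ollama server, which by default listens at `http://localhost:11434` and accepts generation requests at its `/api/generate` endpoint. The model name and prompt below are illustrative; the request is built but not sent, since sending requires a running Ollama instance.

```python
import json
from urllib import request

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> request.Request:
    """Build (but do not send) a generation request for a local Ollama server."""
    payload = json.dumps({
        "model": model,    # e.g. "llama3", downloaded beforehand with `ollama pull llama3`
        "prompt": prompt,
        "stream": False,   # ask for one complete JSON response instead of a token stream
    }).encode("utf-8")
    return request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("llama3", "Summarize this article in two sentences.")
print(req.full_url)  # http://localhost:11434/api/generate
```

With an Ollama server running, the request would be dispatched with `request.urlopen(req)`; the point of the sketch is that the application only speaks simple HTTP and JSON, while Ollama and llama.cpp handle model loading and GPU acceleration underneath.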
Applications like Brave's Leo AI can access RTX-powered AI acceleration to enhance user experiences. NVIDIA's focus on optimization spans the entire technology stack - from hardware to system software to the inference libraries and tools that enable applications to deliver faster, more responsive AI experiences on RTX.
Local vs. Cloud

Brave's Leo AI can run in the cloud or locally on a PC through Ollama.
There are many benefits to running inference with a local model. By not sending prompts to an outside server for processing, the experience is private and always available. For instance, Brave users can get help with their finances or medical questions without sending anything to the cloud. Running locally also eliminates the need to pay for unrestricted cloud access. With Ollama, users can take advantage of a wider variety of open-source models than most hosted services, which often support only one or two varieties of the same AI model.
Users can also interact with models that have different specializations, such as bilingual models, compact-sized models, code generation models and more.
RTX enables a fast, responsive experience when running AI locally. Using the Llama 3 8B model with llama.cpp, users can expect responses of up to 149 tokens per second, or approximately 110 words per second. When using Brave with Leo AI and Ollama, this means snappier responses to questions, requests for content summaries and more.
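The tokens-to-words conversion above rests on a common rule of thumb: roughly 0.75 English words per token (the exact ratio varies by tokenizer and text). A quick sketch of the arithmetic:

```python
# Rough tokens/sec -> words/sec conversion, assuming ~0.75 words per token,
# a common rule of thumb for English text (the exact ratio depends on the tokenizer).
WORDS_PER_TOKEN = 0.75

def tokens_to_words_per_sec(tokens_per_sec: float) -> float:
    return tokens_per_sec * WORDS_PER_TOKEN

print(tokens_to_words_per_sec(149))  # 111.75, in line with "approximately 110 words per second"
```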
NVIDIA internal throughput performance measurements on NVIDIA GeForce RTX GPUs, using a Llama 3 8B model with an input sequence length of 100 tokens, generating 100 tokens.

Get Started With Brave, Leo AI and Ollama

Installing Ollama is easy: download the installer from the project's website and let it run in the background. From a command prompt, users can download and install a wide variety of supported models, then interact with the local model from the command line.
For simple instructions on how to add local LLM support via Ollama, read the company's blog. Once configured to point to Ollama, Leo AI will use the locally hosted LLM for prompts and queries. Users can also switch between cloud and local models at any time.
Brave with Leo AI running on Ollama and accelerated by RTX is a great way to get more out of your browsing experience. You can even summarize and ask questions about AI Decoded blogs! Developers can learn more about how to use Ollama and llama.cpp in the NVIDIA Technical Blog.
Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what's new and what's next by subscribing to the AI Decoded newsletter.