Faster Than a Click: Hyperlink Agent Search Now Available on NVIDIA RTX PCs

Large language model (LLM)-based AI assistants are powerful productivity tools, but without the right context and information, they can struggle to provide nuanced, relevant answers. While most LLM-based chat apps allow users to supply a few files for context, they often don’t have access to all the information buried across slides, notes, PDFs and images in a user’s PC.

Nexa.ai’s Hyperlink is a local AI agent that addresses this challenge. It can quickly index thousands of files, understand the intent of a user’s question and provide contextual, tailored insights.

A new version of the app, available today, adds acceleration for NVIDIA RTX AI PCs, tripling retrieval-augmented generation (RAG) indexing speed. For example, a dense 1GB folder that previously took almost 15 minutes to index can now be ready for search in just four to five minutes. In addition, LLM inference is accelerated by 2x for faster responses to user queries.

Hyperlink on NVIDIA RTX AI PCs delivers up to 3x faster indexing and 2x faster LLM inference. Benchmarked on an RTX 5090 using a test dataset; indexing measured as total index time, inference measured in tokens per second.

Turn Local Data Into Instant Intelligence

Hyperlink uses generative AI to search thousands of files for the right information, understanding the intent and context of a user’s query, rather than merely matching keywords.

To do this, it creates a searchable index of all local files a user indicates — whether a small folder or every single file on a computer. Users can describe what they’re looking for in natural language and find relevant content across documents, slides, PDFs and images.
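The index-then-query pattern described above can be sketched in a few lines. This is a toy illustration, not Hyperlink's implementation: the bag-of-words cosine similarity below stands in for the neural embedding model a real RAG pipeline would use, and all function names are illustrative.

```python
import math
import re
from collections import Counter
from pathlib import Path

def embed(text):
    """Toy 'embedding': a bag-of-words frequency vector.
    A real pipeline would use a neural embedding model instead."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def build_index(folder):
    """Index every readable file under `folder` once, up front,
    so later queries only score vectors instead of rereading files."""
    index = {}
    for path in Path(folder).rglob("*"):
        if path.is_file():
            try:
                index[path] = embed(path.read_text(errors="ignore"))
            except OSError:
                continue  # skip unreadable files
    return index

def search(index, query, top_k=3):
    """Rank indexed files by similarity to a natural-language query,
    returning (path, score) pairs for the best matches."""
    q = embed(query)
    scored = [(str(path), cosine(q, vec)) for path, vec in index.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]
```

With real embeddings, a query like "sci-fi book report comparing themes" would rank a file such as "Lit_Homework_Final.docx" highly on meaning alone; the toy version above still shows why indexing once up front makes every later search fast.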

For example, if a user asks for help with a “Sci-Fi book report comparing themes between two novels,” Hyperlink can find the relevant information — even if it’s saved in a file named “Lit_Homework_Final.docx.”

Combining search with the reasoning capabilities of RTX-accelerated LLMs, Hyperlink then answers questions based on insights from a user’s files. It connects ideas across sources, identifies relationships between documents and generates well-reasoned answers with clear citations.
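The citation step typically works by numbering the retrieved excerpts inside the prompt so the model can point back at its sources. The helper below is a hypothetical sketch of that pattern, not Hyperlink's actual prompt; in a real app the returned string would be sent to a local, GPU-accelerated LLM.

```python
def build_grounded_prompt(question, retrieved):
    """Assemble a prompt that asks the model to answer only from
    numbered source excerpts, so its reply can cite [1], [2], ...
    back to specific files. `retrieved` is a list of
    (filename, excerpt) pairs produced by the search step."""
    sources = "\n".join(
        f"[{i}] {name}: {excerpt}"
        for i, (name, excerpt) in enumerate(retrieved, start=1)
    )
    return (
        "Answer the question using only the numbered sources below, "
        "and cite them inline like [1].\n\n"
        f"Sources:\n{sources}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )
```

Keeping the excerpt-to-number mapping on the app side is what lets a local agent render each citation as a clickable link to the original file.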

All user data stays on the device and is kept private. This means personal files never leave the computer, so users don’t have to worry about sensitive information being sent to the cloud. They get the benefits of powerful AI without sacrificing control or peace of mind.

Hyperlink is already being adopted by professionals, students and creators to:

  • Prepare for meetings: Summarize key discussion points across notes and transcripts.
  • Analyze reports: Get well-researched answers, citing key data points from across industry reports.
  • Create content: Compile writing or video ideas from years of saved notes and drafts.
  • Study smarter: Review a key concept while cramming for a test, searching through lecture notes, slides and tutorials — all at once.
  • Organize receipts: Sort scanned documents and automatically complete expense reports.
  • Debug faster: Search across documentation and comments in code to resolve errors or version conflicts.

Download the Hyperlink app to start experimenting with AI search on RTX PCs.

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter. Follow NVIDIA Workstation on LinkedIn and X.

See notice regarding software product information.