iToverDose / Software · 14 MAY 2026 · 16:30

How GitHub Issues delivers instant navigation with client-side caching

GitHub rebuilt its issue tracker to feel instant by shifting data to the client, cutting perceived load times below 200ms for millions of daily users.

GitHub Blog · 4 min read

Developers know the frustration: closing an issue, clicking a link, then waiting for the next page to load. Even brief delays disrupt focus, turning navigation from a routine task into a context switch. GitHub Issues faced this problem—users weren’t complaining about missing features, but about speed. The bottleneck wasn’t the backend or server rendering; it was the architecture itself, which treated every navigation as a fresh request even when users were revisiting data they’d already loaded.

To fix this, the Issues Performance team redesigned the entire navigation flow. Instead of optimizing servers or chasing backend latency reductions, they moved the heavy lifting to the client. The core idea was simple: render instantly using locally cached data, then quietly revalidate in the background without breaking the user’s flow. This approach required three major changes: a client-side caching layer built on IndexedDB, a preheating strategy to boost cache hit rates, and a service worker to keep cached data available even during hard page reloads.

The result? Millions of daily navigations now feel instantaneous. For teams managing complex codebases or planning AI-assisted workflows, this shift from "loads in a second" to "feels instant" changes how work gets done.

Why latency is now a product-defining metric for developer tools

In 2026, speed isn’t just a nice-to-have—it’s table stakes. When developers juggle multiple issues, review pull requests, or debug production problems, every unnecessary delay fractures concentration. The best tools don’t just work; they disappear into the background, allowing users to stay in flow. GitHub Issues, used by millions weekly, needed to meet this standard.

The team realized that metrics like page load time or server response latency didn’t capture what users actually experienced. A page might technically "load" in two seconds, but if the issue title—the content developers care about most—appears a second and a half into that load, the experience still feels sluggish. To measure real-world performance, GitHub introduced HPC (Highest Priority Content), an internal metric aligned with Web Vitals’ LCP (Largest Contentful Paint). HPC tracks when the primary content—often the issue title or body—first appears on screen, giving a true measure of perceived speed.

They set practical thresholds to classify navigation quality:

  • Instant: HPC under 200ms, where interactions feel immediate.
  • Fast: HPC between 200ms and 1000ms, acceptable but no longer invisible.
  • Slow: HPC 1000ms or higher, where users notice delays.

The goal wasn’t just to eliminate the worst outliers but to shift the entire distribution toward "instant" and "fast." This meant optimizing for the median experience, not just the 99th percentile.
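The thresholds above reduce to a small classification helper, and "optimizing for the median" means summarizing HPC samples by their midpoint rather than a tail percentile. A minimal sketch (function names are illustrative, not GitHub's actual telemetry code):

```typescript
// Classify a navigation by its Highest Priority Content (HPC) time.
// Thresholds follow the article: <200ms instant, <1000ms fast, else slow.
type NavigationQuality = "instant" | "fast" | "slow";

function classifyNavigation(hpcMs: number): NavigationQuality {
  if (hpcMs < 200) return "instant";
  if (hpcMs < 1000) return "fast";
  return "slow";
}

// Shifting the whole distribution means watching the median, not just p99.
function median(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}
```

In production the HPC sample itself would come from something like a `PerformanceObserver` marking when the issue title paints; the helper only shows how the buckets are drawn.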

Mapping real navigation patterns to architectural bottlenecks

Not all navigations are equal. A full browser refresh costs more than a client-side transition within the same app. Before making changes, the team needed to understand how users actually moved through Issues. They identified three navigation types, each with distinct performance characteristics:

  • Hard navigation: A full page reload, including server rendering, asset downloads, JavaScript execution, and React hydration. This is the slowest path.
  • Turbo navigation: A Rails Turbo transition that updates page regions without a full reload, relying on server-rendered responses.
  • Soft navigation: A client-side transition within the React runtime, avoiding full page bootstrap costs.

At the start of the initiative, hard navigations dominated the traffic mix—accounting for over half of all sessions. This revealed a critical insight: even when users were revisiting familiar data, the system still paid the full cost of a fresh load. The architecture wasn’t optimized for revisits; it assumed every navigation was unique. This explained why users felt the tool was "heavy" despite its feature richness.

Building a client-side cache that feels like magic

The solution required rethinking how data is stored and retrieved. The team built a client-side caching layer using IndexedDB, a browser-based database designed for large amounts of structured data. This allowed the app to store issue data locally after the first visit, making subsequent navigations instantaneous.
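The shape of such a cache is straightforward: entries keyed per issue, each carrying a timestamp so the app can decide when to revalidate. A minimal sketch, using a `Map` as an in-memory stand-in for the IndexedDB object store (the real store persists across sessions; the class and method names here are illustrative):

```typescript
// A cache entry remembers when it was stored so staleness is checkable.
interface CacheEntry<T> {
  value: T;
  storedAt: number; // epoch ms
}

class IssueCache<T> {
  // Stand-in for an IndexedDB object store keyed by issue.
  private store = new Map<string, CacheEntry<T>>();

  put(key: string, value: T, now: number = Date.now()): void {
    this.store.set(key, { value, storedAt: now });
  }

  // Returns the cached value even if stale; the caller decides whether
  // to render it immediately and revalidate in the background.
  get(key: string): CacheEntry<T> | undefined {
    return this.store.get(key);
  }

  isStale(key: string, maxAgeMs: number, now: number = Date.now()): boolean {
    const entry = this.store.get(key);
    return !entry || now - entry.storedAt > maxAgeMs;
  }
}
```

Returning stale entries rather than dropping them is the key design choice: a slightly old issue title painted instantly beats a blank screen waiting on the network.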

But caching alone wasn’t enough. To maximize hit rates, they implemented a preheating strategy—proactively loading data into the cache before users needed it. For example, when a developer hovered over an issue link, the system would fetch and cache the issue’s metadata preemptively. This reduced the chance of a cache miss during actual navigation.
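The hover-triggered preheat reduces to a guarded fetch-and-store. A sketch under stated assumptions: `fetchIssue` and the `Map`-based cache are hypothetical stand-ins for GitHub's actual data layer, and in the browser this would be wired to `mouseenter`/`focus` events on issue links:

```typescript
// Preheat the cache for an issue before the user navigates to it.
// Skips the network entirely if the entry is already warm.
type Fetcher = (id: string) => Promise<string>;

async function preheat(
  id: string,
  cache: Map<string, string>,
  fetchIssue: Fetcher, // stand-in for the real data-fetching call
): Promise<void> {
  if (cache.has(id)) return; // already cached; avoid a redundant fetch
  cache.set(id, await fetchIssue(id));
}
```

The `has` guard is what keeps preheating from turning into wasted bandwidth: hovering the same link twice costs one request, not two.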

The final piece was the service worker, a background script that intercepts network requests. If a hard navigation occurred (like a browser refresh), the service worker could serve cached data immediately while silently revalidating in the background. This ensured continuity: users never saw a blank screen, even if the cache was slightly stale.
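This serve-then-revalidate behavior is the classic stale-while-revalidate pattern. A minimal sketch of the control flow: in the real app this would sit inside a service worker's `fetch` handler against the Cache Storage API, but it is written here as a plain function over a `Map` so the logic is visible on its own:

```typescript
// Serve cached content immediately if present; refresh it in the
// background either way so the next navigation sees fresh data.
async function staleWhileRevalidate(
  key: string,
  cache: Map<string, string>,
  fetchFresh: (key: string) => Promise<string>,
): Promise<string> {
  // Kick off revalidation unconditionally; it updates the cache on arrival.
  const refresh = fetchFresh(key).then((fresh) => {
    cache.set(key, fresh);
    return fresh;
  });
  const cached = cache.get(key);
  // Cache hit: return possibly-stale content now; refresh settles later.
  if (cached !== undefined) return cached;
  // Cache miss: the user waits on the network this one time.
  return refresh;
}
```

The asymmetry is deliberate: the network is only ever on the critical path for a true cache miss, which preheating makes rare.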

The tradeoffs and what’s next for instant navigation

This approach isn’t free. Client-side caching increases memory usage, and preheating consumes bandwidth. The service worker adds complexity to the codebase. But for a product where speed is a core differentiator, these tradeoffs are justified. The team also acknowledges that "instant" isn’t a one-time achievement—it’s an ongoing commitment. Future work includes expanding the caching strategy to other parts of the platform and refining preheating rules to reduce unnecessary data fetching.

For engineering teams building data-heavy web apps, the lessons here are clear: shift work to the client where possible, optimize for perceived latency over raw backend speed, and measure what users actually care about. The goal isn’t just to load faster—it’s to make speed invisible.

As AI-assisted workflows reshape how developers work, tools like GitHub Issues must evolve from being "good enough" to feeling effortless. The shift from latency to instant isn’t just a technical upgrade—it’s a fundamental redefinition of product quality.

