Web Vitals & Performance Metrics

Pulse tracks six metrics that together describe how a real user experiences your site — how fast it looks ready, how fast it responds, and how stable it feels. This page explains each one, what it means for your users, and what to do when it regresses.

LCP

Largest Contentful Paint

How fast the main content appears

good ≤ 2.50 s · needs ≤ 4.00 s · poor > 4.00 s

The time from page load until the largest visible element (hero image, headline, video poster) finishes rendering.

what users feel

Users stare at a blank or half-loaded hero. Many close the tab before the page even appears — bounce rates climb and the brand feels unreliable.

business impact

If LCP is slow, users see a blank or skeleton screen and may leave before the page is usable.

Common causes

  • Slow server response (high TTFB)
  • Large unoptimised hero images or videos
  • Render-blocking JavaScript or CSS
  • Web fonts loading late

What to do

  • Compress and serve hero images as AVIF/WebP, set explicit width/height
  • Preload the LCP image with <link rel="preload" as="image">
  • Move non-critical JS to defer/async, inline critical CSS
  • Use a CDN and HTTP/2 to reduce TTFB

how Pulse measures it

Captured via the browser's PerformanceObserver on entries of type 'largest-contentful-paint'. We report the latest entry observed before the page is hidden or unloaded, which matches the official web-vitals algorithm.
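
A condensed sketch of that capture (the variable names are illustrative, not the SDK's):

```ts
// Keep the latest LCP candidate; each new entry supersedes the previous one.
let lcp = 0;

const po = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    lcp = entry.startTime; // for LCP entries, startTime is renderTime, or loadTime as fallback
  }
});
po.observe({ type: 'largest-contentful-paint', buffered: true });

// The value is final once the page is hidden or unloaded.
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    po.disconnect();
    console.log('LCP (ms):', lcp); // a real SDK would queue this for upload
  }
});
```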

FCP

First Contentful Paint

When the first pixel of content appears

good ≤ 1.80 s · needs ≤ 3.00 s · poor > 3.00 s

The moment the browser renders the first text, image, or SVG — the first sign that something is happening.

what users feel

The screen stays blank/white for an uncomfortable beat. Users think the link is broken, hit reload, or navigate away.

business impact

Slow FCP makes the page feel frozen; users often abandon if nothing appears in the first second.

Common causes

  • Render-blocking stylesheets or scripts in <head>
  • Excessive third-party tags before content
  • Slow DNS / TLS handshake

What to do

  • Inline critical CSS, defer the rest
  • Add async/defer to non-essential <script> tags
  • Preconnect to critical origins (fonts, APIs)

how Pulse measures it

Captured from the 'paint' performance entry named 'first-contentful-paint', recorded once when the browser renders any content for the first time.
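
Roughly, as a one-shot observer:

```ts
// FCP is a single 'paint' entry; disconnect after the first match.
new PerformanceObserver((list, po) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP (ms):', entry.startTime);
      po.disconnect();
    }
  }
}).observe({ type: 'paint', buffered: true });
```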

TTFB

Time to First Byte

How quickly the server responds

good ≤ 800 ms · needs ≤ 1.80 s · poor > 1.80 s

The time between the browser requesting the page and the first byte of the response arriving.

what users feel

Every click that loads a new page feels heavy and 'stuck' before anything starts drawing — even on fast networks.

business impact

TTFB is the floor of every other loading metric. A slow server means FCP and LCP can never be fast, no matter how optimised the front end is.

Common causes

  • Slow database queries or unoptimised backend logic
  • No caching / cold serverless functions
  • Geographic distance from origin (no CDN/edge)
  • Heavy redirects

What to do

  • Cache HTML at the edge (CDN, Cache-Control)
  • Optimise slow database queries; add indexes
  • Pre-warm or move workloads to edge runtimes
  • Eliminate redirect chains

how Pulse measures it

Derived from the 'navigation' PerformanceEntry: responseStart − requestStart. This measures pure server wait time, excluding the redirect, DNS, and connection-setup phases; redirects show up separately in the network tab.
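
As a sketch of that derivation:

```ts
// Server wait time only: responseStart − requestStart skips the redirect,
// DNS, and connection-setup phases of the navigation entry.
const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
if (nav) {
  console.log('TTFB (ms):', nav.responseStart - nav.requestStart);
}
```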

INP

Interaction to Next Paint

How responsive the page feels to clicks/taps

good ≤ 200 ms · needs ≤ 500 ms · poor > 500 ms

The longest delay between a user interaction (tap, click, keypress) and the next visual update.

what users feel

Buttons feel dead, dropdowns hesitate, typing lags behind keystrokes. Users double-click, triggering duplicate actions and errors.

business impact

High INP feels like the page is stuck after you click — buttons don't respond, inputs lag.

Common causes

  • Long JavaScript tasks blocking the main thread
  • Heavy event handlers (especially on click/input)
  • Synchronous layout thrashing
  • Large React renders triggered by interaction

What to do

  • Break long tasks with setTimeout / scheduler.yield() (see the sketch after this list)
  • Debounce/throttle expensive handlers
  • Memoise React components and avoid unnecessary re-renders
  • Move work off the main thread with Web Workers
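
A minimal sketch of the first item in this list: chunk the work and yield to the main thread roughly every 50 ms. The yieldToMain helper and the 50 ms budget are illustrative choices, not Pulse recommendations; scheduler.yield() is not yet available in every browser, hence the setTimeout fallback.

```ts
// Yield control between chunks of work so pending input can be handled.
function yieldToMain(): Promise<void> {
  const sched = (globalThis as any).scheduler;
  if (sched?.yield) return sched.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a list without ever blocking the main thread for long.
async function processItems<T>(items: T[], handle: (item: T) => void) {
  let deadline = performance.now() + 50; // ~50 ms budget per chunk
  for (const item of items) {
    handle(item);
    if (performance.now() >= deadline) {
      await yieldToMain(); // let clicks/keys run and paint between chunks
      deadline = performance.now() + 50;
    }
  }
}
```
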
how Pulse measures it

Aggregated from PerformanceObserver entries of type 'event' with duration ≥ 40ms. We report the worst interaction observed during the session window.
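
A condensed version of that aggregation; filtering on interactionId is how the Event Timing API separates discrete interactions from raw hover/scroll events:

```ts
// Track the slowest interaction; durationThreshold drops entries under 40 ms.
let worstInp = 0;

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    if (!entry.interactionId) continue; // non-interaction events don't count
    worstInp = Math.max(worstInp, entry.duration);
  }
}).observe({ type: 'event', durationThreshold: 40, buffered: true });
```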

CLS

Cumulative Layout Shift

How much the layout jumps around

good ≤ 0.10 · needs ≤ 0.25 · poor > 0.25

A unitless score (0–1+) measuring how much visible elements move unexpectedly during page load.

what users feel

A button moves just as the user taps it — they confirm a purchase, dismiss a modal, or click an ad by mistake. The page feels janky and amateur.

business impact

Layout shifts cause mis-clicks (buying the wrong thing, dismissing the wrong dialog) and feel unpolished.

Common causes

  • Images and ads without width/height attributes
  • Web fonts swapping in (FOIT/FOUT)
  • Content injected above existing content (banners, embeds)
  • Animations using top/left instead of transform

What to do

  • Always set width/height (or aspect-ratio) on <img>, <video>, <iframe>
  • Reserve space for ads, embeds, and dynamic banners
  • Use font-display: optional or preload critical fonts
  • Animate with transform/opacity, not layout properties

how Pulse measures it

Sums 'layout-shift' PerformanceObserver entries where hadRecentInput is false, grouped into session windows capped at 5 s in which consecutive shifts are at most 1 s apart. We report the largest window's value.
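
The windowing logic, roughly sketched (the LayoutShift interface is declared locally because it is not in the standard DOM typings):

```ts
// The layout-shift entry type isn't in the standard DOM typings.
interface LayoutShift extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
}

let cls = 0;          // largest window seen so far
let windowValue = 0;  // running total of the current window
let windowStart = 0;
let lastShift = 0;

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LayoutShift[]) {
    if (entry.hadRecentInput) continue; // user-initiated shifts don't count
    const newWindow =
      windowValue === 0 ||
      entry.startTime - lastShift > 1000 ||  // over 1 s since the last shift
      entry.startTime - windowStart > 5000;  // window already spans 5 s
    if (newWindow) {
      windowValue = 0;
      windowStart = entry.startTime;
    }
    windowValue += entry.value;
    lastShift = entry.startTime;
    cls = Math.max(cls, windowValue);
  }
}).observe({ type: 'layout-shift', buffered: true });
```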

API

API request duration

How long backend calls take

good ≤ 300 ms · needs ≤ 1.00 s · poor > 1.00 s

Round-trip time for fetch/XHR calls made from the page to your backend or third-party APIs.

what users feel

Spinners linger after clicks, forms hang on submit, dashboards take seconds to populate. Users abandon flows mid-task.

business impact

Slow APIs delay UI updates after interaction and inflate INP. Long pending requests block user flows like checkout.

Common causes

  • Unindexed database queries / N+1 queries
  • Cold serverless functions
  • Large response payloads (no pagination/compression)
  • Slow third-party dependencies

What to do

  • Add database indexes; eliminate N+1 with joins or batching
  • Paginate, project only needed fields, enable gzip/brotli
  • Cache responses at the edge or in-memory
  • Set timeouts on third-party calls and degrade gracefully (see the sketch after this list)
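
For the last item, one common pattern is a hard deadline via AbortSignal.timeout() plus a safe fallback; the endpoint and Item shape below are invented for illustration:

```ts
type Item = { id: string; title: string }; // hypothetical payload shape

// Give the third-party call a hard 1 s deadline and degrade gracefully.
async function fetchRecommendations(): Promise<Item[]> {
  try {
    const res = await fetch('https://recs.example.com/api/items', {
      signal: AbortSignal.timeout(1000), // abort if no response within 1 s
    });
    if (!res.ok) throw new Error(`upstream ${res.status}`);
    return await res.json();
  } catch {
    return []; // render without recommendations rather than blocking the flow
  }
}
```
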
how Pulse measures it

Captured by patching window.fetch and XMLHttpRequest. We record duration, status, size, and origin — same-origin and third-party hosts both surface here.
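
A simplified sketch of the fetch half of that patching (record stands in for the SDK's real event queue):

```ts
const originalFetch = window.fetch;

window.fetch = async (...args: Parameters<typeof fetch>) => {
  const started = performance.now();
  try {
    const response = await originalFetch(...args);
    record(args[0], performance.now() - started, response.status);
    return response;
  } catch (err) {
    record(args[0], performance.now() - started, 0); // network-level failure
    throw err;
  }
};

// Illustrative sink; a real SDK would batch these into its event queue.
function record(input: RequestInfo | URL, ms: number, status: number) {
  console.log(new Request(input).url, Math.round(ms), status);
}
```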

methodology

How Pulse measures performance

The numbers in your dashboard come from real visitors, not synthetic tests.

Pulse is a Real-User Monitoring (RUM) tool. The SDK runs in your visitors' browsers and reports actual experiences — not synthetic lab tests.

  • Each page load creates a session keyed to the visitor's tab. Reloads or navigations to a new origin start a new session.
  • Web Vitals are collected via the standard PerformanceObserver API as defined by web.dev/vitals.
  • Network and resource timings are gathered by patching fetch/XHR and reading PerformanceResourceTiming entries.
  • Errors come from window.onerror, unhandledrejection, and console.error.
  • Events are batched and flushed every 5 seconds (or on page hide/unload via sendBeacon) to minimise overhead; see the sketch after this list.
  • Sampling: 100% of sessions are captured by default. The dashboard caps recent events at 2000 per session for snappy queries; older events remain in the database.
  • Thresholds (good / needs improvement / poor) follow Google's official web.dev/vitals guidance.
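
The flush cycle described in the batching item above, as a minimal sketch (the endpoint path is invented):

```ts
const queue: object[] = [];
const ENDPOINT = '/pulse/ingest'; // illustrative path, not the real one

function flush() {
  if (queue.length === 0) return;
  const body = JSON.stringify(queue.splice(0)); // drain the queue
  // sendBeacon survives page unload, where a fetch would be cancelled.
  navigator.sendBeacon(ENDPOINT, body);
}

setInterval(flush, 5000); // periodic flush every 5 s
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') flush(); // flush on hide
});
```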

For the official spec, see web.dev/vitals.

beyond core vitals

Other signals on your dashboard

Pulse also captures these per-session signals — they don't have a single Google grade, but they're how you actually root-cause regressions.

Long tasks

Any main-thread task > 50ms that risks blocking input. Surfaced in the Rendering tab and overlaid on the timeline.
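
Pulse doesn't spell out its capture here, but the standard way to observe these is the 'longtask' entry type:

```ts
// Each entry is a main-thread task that ran for more than 50 ms.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('long task (ms):', entry.duration, 'at', entry.startTime);
  }
}).observe({ type: 'longtask', buffered: true });
```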

WebSocket frames

Per-channel chatty (msgs/s), heavy (avg bytes), blocking (handler ms), and bursty (gap p95) metrics, plus the biggest contributing frame.

Hardware Impact

Device class buckets (≤2GB / 4GB / 8GB+, low/mid/high CPU) so a regression on weak hardware doesn't get masked by averages.

Root-cause hypotheses

For anomalies, Pulse proposes likely causes (deploys, device class, network, geography) and lets you compare ranges and export evidence.

Tip: in the dashboard, every alert and metric card links back here.