
Welcome to Pulse

Pulse is a real-user monitoring (RUM) platform that measures how your website actually performs in the wild — across real browsers, real networks, and real devices. This page walks you through what Pulse measures, how it captures data, and how to onboard in a few minutes.

1. The Pulse solution

Synthetic tools (like Lighthouse) score your site from a controlled lab environment. Pulse takes the opposite approach: it observes real users in production and reports what they actually experience. You get three layers of insight from one install:

  • Web Vitals

    LCP, FCP, TTFB, INP, CLS — Google's user-experience standard.

  • Network & resources

    API call timings, WebSocket frames, asset loads, errors, long tasks, and a replayable timeline per session.

  • Hardware impact

    CPU cores, memory, GPU, and network class — to separate code issues from device limits.
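As a concrete illustration, the five vitals above can be bucketed against Google's published good / needs-improvement / poor thresholds. This is a minimal sketch of that bucketing, not Pulse's internal scoring:

```javascript
// Google's Web Vitals thresholds: [good ceiling, poor floor].
// LCP/FCP/TTFB/INP are in milliseconds; CLS is unitless.
const THRESHOLDS = {
  LCP: [2500, 4000],
  FCP: [1800, 3000],
  TTFB: [800, 1800],
  INP: [200, 500],
  CLS: [0.1, 0.25],
};

// Bucket a single metric sample into a rating.
function rateVital(name, value) {
  const [good, poor] = THRESHOLDS[name];
  if (value <= good) return 'good';
  if (value <= poor) return 'needs-improvement';
  return 'poor';
}
```

For example, `rateVital('INP', 350)` returns `'needs-improvement'`: the interaction finished after the 200 ms "good" ceiling but before the 500 ms "poor" floor.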

2. What we measure

Page experience

  • Largest Contentful Paint (LCP)
  • First Contentful Paint (FCP)
  • Time to First Byte (TTFB)
  • Interaction to Next Paint (INP)
  • Cumulative Layout Shift (CLS)

Session context

  • Browser, OS, viewport, country
  • Device CPU cores, memory, GPU renderer (hashed)
  • Network type & effective bandwidth
  • API call timings & failure rate (with problem-API hotspots)
  • WebSocket channels — chatty / heavy / blocking / bursty stats per channel
  • Long tasks & main-thread blocking (rendering tab)
  • Resource waterfall (scripts, images, fonts)

For full definitions and thresholds, see the Glossary.

3. Measurement approach

Real User Monitoring (RUM)

Pulse uses the browser's native PerformanceObserver and the official web-vitals library to capture metrics during actual page loads — no fake browsers, no synthetic timing.

Two collection paths

  • SDK — drop a small script (/sdk.js) on your site to measure all visitors.
  • Chrome Extension — install Pulse locally to measure any site you browse, ideal for QA & competitive benchmarking.
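For the SDK path, the snippet ultimately resolves to a script URL carrying your project key. The query-parameter names below (`key`, `sample`) are illustrative assumptions, not the documented interface:

```javascript
// Build the SDK script URL for a project (sketch only; the 'key'
// and 'sample' parameter names are assumptions).
function buildSdkUrl(origin, apiKey, sampleRate = 1.0) {
  const url = new URL('/sdk.js', origin);
  url.searchParams.set('key', apiKey);
  url.searchParams.set('sample', String(sampleRate));
  return url.toString();
}
```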

Sampling & privacy

Hardware specs are sampled at a configurable rate to reduce payload size. GPU renderer strings are hashed before storage, and we never collect PII.
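A sketch of what that client-side pipeline could look like. The sampling check and the FNV-1a hash are illustrative stand-ins for whatever primitives the SDK actually uses:

```javascript
// Decide whether this session contributes hardware specs.
// rate is in [0, 1]; e.g. 0.1 samples roughly 10% of sessions.
function shouldSampleHardware(rate) {
  return Math.random() < rate;
}

// FNV-1a 32-bit hash: deterministic and one-way enough for bucketing,
// so the raw GPU renderer string never needs to leave the device.
function hashRenderer(str) {
  let h = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept in 32 bits
  }
  return h.toString(16);
}
```

Because the hash is deterministic, identical GPUs still group into the same device-class bucket across sessions without the renderer string itself being stored.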

4. Onboarding in 4 steps

  1. Create an account

    Sign in to provision your workspace and your first project API key.

  2. Pick a collection method

    Add the SDK snippet to your site, or install the Pulse Chrome Extension to measure any URL locally.

  3. Generate a session

    Visit your site (or any page with the extension active). Pulse will capture Web Vitals, network calls, and device specs automatically and stream them to your dashboard within seconds.

  4. Open your dashboard

    Head to the Dashboard to review per-session waterfalls, the Hardware Impact panel, and trend charts.

5. Reading your dashboard

Project overview

P75 vitals across all sessions, trended over time and split by browser/OS.
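P75 here means the value that 75% of sessions meet or beat. A nearest-rank sketch (the dashboard's exact interpolation may differ):

```javascript
// 75th percentile by the nearest-rank method: sort ascending and
// take the value at rank ceil(0.75 * n), 1-indexed.
function p75(values) {
  if (values.length === 0) return undefined;
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.ceil(0.75 * sorted.length) - 1];
}
```

For four LCP samples of 900, 1200, 1500, and 3000 ms, `p75` returns 1500: three of the four sessions (75%) loaded at least that fast, and the 3000 ms outlier does not dominate the headline number the way an average would.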

Session detail

Per-visit replayable timeline with tabs for APIs, WebSockets, Resources, Trends, Device, Network, Rendering, Errors, and Live.

APIs tab

TimelineReplay on top, Problem APIs table below. Click any row to scrub the timeline cursor to that request and see surrounding ±2.5s context (long tasks, vitals, related WS frames).
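The context lookup itself is a simple window filter around the selected timestamp. The event shape below (`{ ts, type }`) is an assumption for illustration:

```javascript
// Half-width of the context window around a selection, in ms.
const CONTEXT_MS = 2500;

// Return events within ±2.5s of the selected timestamp, optionally
// filtered by type ('longtask' | 'vital' | 'ws' | 'http' in this sketch).
function contextWindow(events, selectedTs, typeFilter = null) {
  return events.filter(
    (e) =>
      Math.abs(e.ts - selectedTs) <= CONTEXT_MS &&
      (typeFilter === null || e.type === typeFilter)
  );
}
```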

Websockets tab

Same replay pattern for WS channels with summary cards (chatty / heavy / blocking / bursty), the biggest contributing frame, and event-type filters in the context panel.
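The four labels can be read as rate-style heuristics per channel. The cutoffs below are invented for illustration; Pulse's actual thresholds are not documented here:

```javascript
// Classify a WS channel from per-second stats. All thresholds are
// illustrative assumptions, not Pulse's real cutoffs.
function classifyChannel(stats) {
  const labels = [];
  if (stats.framesPerSec > 10) labels.push('chatty');        // high message rate
  if (stats.bytesPerSec > 100 * 1024) labels.push('heavy');  // high payload volume
  if (stats.handlerMsPerSec > 50) labels.push('blocking');   // main-thread handler cost
  if (stats.peakFramesPerSec > 5 * stats.framesPerSec) labels.push('bursty'); // spiky traffic
  return labels;
}
```

A channel can earn several labels at once; a high-frequency telemetry feed with slow message handlers, for instance, would show up as both chatty and blocking.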

Session compare

Pin two sessions side by side to diff vitals, INP interactions, and long tasks — useful for before/after deploy comparisons or device-class regressions.
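Conceptually, the comparison is a per-metric delta where a positive value means the second session regressed (for lower-is-better metrics). A sketch with assumed metric maps:

```javascript
// Diff P75 vitals between two pinned sessions; positive delta means
// session B regressed relative to A for lower-is-better metrics.
function diffVitals(a, b) {
  const deltas = {};
  for (const name of Object.keys(a)) {
    // Only diff metrics present in both sessions; round away float noise.
    if (name in b) deltas[name] = +(b[name] - a[name]).toFixed(3);
  }
  return deltas;
}
```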

Root Cause

Project-wide correlations between regressions and likely factors (deploys, device class, network, geography). Compare ranges and export evidence as PDF or CSV.

Hardware Impact panel

Coverage diagnostics, per-field collection errors, missing-signal warnings, and remediation tips so device-class buckets stay accurate.

Glossary

Need a refresher? Every metric is defined in the Glossary.

6. What's new

  • Replayable session timeline inside both the APIs and WebSockets tabs — click an offender and the cursor jumps to it and auto-scrolls into view.
  • Problem APIs / Problem Channels hotspot tables surface the worst latency, error rate, blocking time and burstiness for the session.
  • Selection context panel shows a ±2.5s window around any pick: WS frames (with direction/handler ms), long tasks, web vitals and related HTTP — filterable by event type.
  • WS summary cards quantify chatty / heavy / blocking / bursty per channel and pin the biggest contributing frame across the whole session.
  • Root Cause workspace with range comparison, factor correlations, evidence drill-down and PDF / CSV export.
  • Setup share links from Quick Start — generate a tokenized status URL or download a PNG of your onboarding state to send to teammates.