
The Science of Modern Web Performance — A Deep Dive into Core Web Vitals, INP, LCP, CLS, RUM, Lighthouse, Critical Rendering Path, and Speculation Rules (2025)


TL;DR — 2024 was a tectonic shift for web performance. INP (Interaction to Next Paint) replaced FID, reshaping the three Core Web Vitals into LCP, CLS, and INP, while Speculation Rules API (Chrome 121), Partial Prerendering (Next.js 14), HTTP/3 QUIC (past 30% of all traffic), and Early Hints (103 Status) all standardized and went mainstream in the same year. To answer "why is my site slow," this post covers the Critical Rendering Path from first principles, breaking up Long Tasks, RUM vs Lab Data, why Lighthouse is often wrong, Islands and Resumability (Qwik), Image/Font optimization (AVIF, font-display: optional), and the 2025 performance tooling stack (Vercel Analytics, SpeedCurve, Perfetto) — a full landscape of modern web performance.

Why Web Performance Became a Hot Topic Again

Web performance became mainstream in the early 2010s with YSlow (Yahoo) and PageSpeed Insights (Google), but then faded into "infra team checklist" territory for years. Then in 2020, Google officially announced Core Web Vitals as a search ranking signal, and in 2024 INP replaced FID — performance now directly drives search rank, ad conversion, and bounce rate as a business metric.

By the numbers:

  • Amazon: Every 100ms of page load delay costs 1% in revenue (2006 data, more sensitive in 2024)
  • Walmart: 1-second faster LCP → 2% conversion lift
  • BBC: 10% more bounces per second of delay
  • Vodafone: 31% LCP improvement → 8% sales conversion lift (2021 case study)

According to the 2024 Chrome UX Report (CrUX, real-user data), only 42% of the world's top 1M sites pass all three Core Web Vitals. Sites built on SPA frameworks like React/Vue/Angular pass at only 28%, far below static sites (65%). Performance is also the price of your framework choice.

This post walks through the definitions and measurement of the three Core Web Vitals, why the same code is fast in Lab and slow in Field, and how to track and fix Long Tasks and Layout Shifts — from first principles to practice. Performance doesn't come from "a single trick" but from understanding the entire rendering pipeline.

Browser Rendering Pipeline — The Journey to a Pixel

To discuss web performance, you need to understand the Critical Rendering Path — how the browser takes HTML and draws pixels. Every place time leaks along this path is a performance bug.

1. Navigation — URL entered / link clicked
2. DNS Lookup — example.com → 93.184.216.34
3. TCP + TLS Handshake — 3-way handshake + TLS (HTTP/1.1: ~300ms, HTTP/3: ~100ms)
4. HTTP Request — GET /
5. TTFB — Time To First Byte (server response)
6. HTML Parsing — Build DOM tree (fetches external resources during parsing)
7. CSSOM Construction — Parse CSS, build CSSOM tree
8. Render Tree — DOM + CSSOM → only nodes that render
9. Layout (Reflow) — Calculate position/size of each node
10. Paint — Generate pixel info (per layer)
11. Composite — GPU composites layers
12. Display — First pixels the user sees (FCP)
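
The early steps of this pipeline are observable in the field via the Navigation Timing API. Below is a sketch mapping steps 2–6 to timing fields; the helper is pure, and a stubbed entry stands in for the browser object so the arithmetic is visible. The field choices are approximations (e.g. responseStart minus requestStart roughly isolates server think time for step 5).

```javascript
// Map Critical Rendering Path steps to Navigation Timing fields.
function timings(nav) {
  return {
    dns: nav.domainLookupEnd - nav.domainLookupStart,   // step 2
    connect: nav.connectEnd - nav.connectStart,         // step 3 (TCP + TLS)
    ttfb: nav.responseStart - nav.requestStart,         // step 5 (server response)
    domParse: nav.domInteractive - nav.responseEnd,     // step 6 (HTML parsing)
  }
}

// In the browser:
// const [nav] = performance.getEntriesByType('navigation')
// console.log(timings(nav))

// Stubbed entry (all values in ms) so the sketch runs anywhere:
console.log(timings({
  domainLookupStart: 10, domainLookupEnd: 40,
  connectStart: 40, connectEnd: 140,
  requestStart: 140, responseStart: 340,
  responseEnd: 360, domInteractive: 560,
}))  // { dns: 30, connect: 100, ttfb: 200, domParse: 200 }
```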

Key bottlenecks at each step:

  1. DNS + TCP + TLS — 3-4 RTTs (Round Trip Time) before the first byte. This is what HTTP/3 QUIC (0-RTT resumption) attacks.
  2. TTFB — Server response time. For SSR: DB + rendering; for static files: whether CDN cache hits.
  3. HTML Parsing Blocking — <script> tags block parsing by default. Solved by async/defer.
  4. CSSOM Blocking — CSS is a render-blocking resource. The Render Tree is not built until CSS finishes loading.
  5. Layout — Forced synchronous layout (e.g. reading offsetHeight) is a performance killer.
  6. Paint / Composite — Use will-change: transform and contain: layout to trigger GPU layer separation.
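
Bottleneck 5 in a sketch: interleaving style writes with offsetHeight reads forces a synchronous layout on every read, while batching all reads before all writes triggers at most one. Stub objects stand in for DOM nodes here so the pattern runs anywhere; in a real page, els would come from document.querySelectorAll.

```javascript
const els = [
  { offsetHeight: 100, style: {} },
  { offsetHeight: 250, style: {} },
]

// Bad: read, write, read, write. Each read flushes any pending layout.
// els.forEach(el => { el.style.height = el.offsetHeight * 2 + 'px' })

// Good: phase 1 batches every read, phase 2 batches every write.
const heights = els.map(el => el.offsetHeight)
els.forEach((el, i) => { el.style.height = heights[i] * 2 + 'px' })

console.log(els.map(el => el.style.height))  // ['200px', '500px']
```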

With JS frameworks like React/Vue, add JS download, parsing, execution, and Hydration on top. These extra steps are the root reason SPAs struggle with Core Web Vitals.

The Three Core Web Vitals (2024–2025)

Google defined Core Web Vitals in 2020 as the three pillars of user experience:

  • Loading — LCP
  • Interactivity — FID → INP (replaced in March 2024)
  • Visual Stability — CLS

LCP — Largest Contentful Paint (Loading)

Definition: The time when the largest content element visible in the viewport (image, video poster, block of text) is drawn.

Thresholds: <2.5s = Good, 2.5–4.0s = Needs Improvement, >4.0s = Poor.

Measurement: largest-contentful-paint entries via the PerformanceObserver API.

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP element:', entry.element)
    console.log('LCP time:', entry.startTime)
    console.log('LCP render time:', entry.renderTime)
    console.log('LCP size:', entry.size)
  }
}).observe({ type: 'largest-contentful-paint', buffered: true })

The four main culprits for slow LCP:

  1. Slow TTFB — If the server takes 1 second, passing <2.5s LCP is impossible.
  2. Render-blocking resources — A delayed <link rel="stylesheet"> delays LCP directly.
  3. Resource load time — The LCP image must not be lazy-loaded. Use fetchpriority="high".
  4. Client-side rendering — With React, the LCP element doesn't hit the DOM until JS downloads and Hydration completes.

LCP optimization checklist:

  • LCP image gets fetchpriority="high" + loading="eager" + decoding="async"
  • <link rel="preload" as="image" href="/hero.webp" imagesrcset="..." fetchpriority="high">
  • Above-the-fold content uses inline CSS
  • Fonts with font-display: optional or swap
  • CDN + HTTP/3 + Brotli compression
  • SSR or SSG (avoid CSR)

CLS — Cumulative Layout Shift (Visual Stability)

Definition: How much the layout unexpectedly shifts during page load. Accumulated "impact fraction" times "distance fraction" of the moved element.

Thresholds: <0.1 = Good, 0.1–0.25 = Needs Improvement, >0.25 = Poor.

CLS formula: impact fraction × distance fraction

  • Impact Fraction: Ratio of moved element's area to the viewport
  • Distance Fraction: Distance moved / viewport size
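
In the field, each layout-shift entry already carries impact × distance as its value, so accumulating CLS is a sum that skips shifts within 500ms of user input (flagged by hadRecentInput). A minimal sketch, ignoring the session-window grouping Chrome additionally applies:

```javascript
// Sum layout-shift values, excluding user-initiated shifts.
function computeCLS(entries) {
  return entries.reduce(
    (sum, e) => (e.hadRecentInput ? sum : sum + e.value), 0)
}

// In the browser:
// new PerformanceObserver((list) => {
//   console.log('CLS so far:', computeCLS(list.getEntries()))
// }).observe({ type: 'layout-shift', buffered: true })

// Stubbed entries so the sketch runs anywhere:
console.log(computeCLS([
  { value: 0.08, hadRecentInput: false },
  { value: 0.3, hadRecentInput: true },   // user-initiated, excluded
  { value: 0.05, hadRecentInput: false },
]))  // ≈ 0.13
```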

The five main CLS culprits:

  1. Unsized images — Missing width/height on <img> → layout reflows after the image loads.
  2. Unsized ads/embeds — AdSense, YouTube embed, iframe.
  3. FOIT/FOUT — Text height changes when fonts swap.
  4. Dynamic content injection — Banners/alerts inserted at the top.
  5. Web font load delay — font-display: swap itself causes CLS (ironically).

CLS optimization:

<!-- Specify image size -->
<img src="/hero.jpg" width="1200" height="630" alt="..." />

<!-- Reserve space with CSS aspect-ratio -->
<style>
  .embed { aspect-ratio: 16 / 9; }
</style>

<!-- Adjust font fallback size -->
<style>
  @font-face {
    font-family: 'Inter';
    src: url('inter.woff2') format('woff2');
    font-display: optional;  /* Prevent CLS, keep fallback if font doesn't load */
    size-adjust: 107%;       /* Match fallback size */
  }
</style>

<!-- Skeleton / Placeholder -->
<div class="skeleton" style="min-height: 400px;">Loading...</div>

INP — Interaction to Next Paint (Officialized March 2024)

Definition: The time from when a user clicks/taps/types until the next frame is drawn — measured as the worst (slowest) interaction over the page's entire lifetime (close to the 98th percentile).

Thresholds: <200ms = Good, 200–500ms = Needs Improvement, >500ms = Poor.

Why INP replaced FID:

  • FID (First Input Delay) measures only the "start delay" of the first input (input → handler start).
  • But real UX problems are the total time from input → screen update. Long JS tasks, re-renders, Layout, and Paint all need to count.
  • FID passed <100ms on most sites → no discriminating power. INP is much stricter.

INP formula (simplified):

INP = max(interactions) where interaction_time = 
  (input delay) + (processing time) + (presentation delay)

Measuring INP:

import { onINP } from 'web-vitals'

onINP((metric) => {
  console.log('INP:', metric.value, 'ms')
  console.log('Attribution:', metric.attribution)
  // attribution: { interactionType, eventTarget, loafTime, ... }
})

Seven main culprits for bad INP:

  1. Long Task — JS that blocks the main thread for >50ms.
  2. Large React component re-renders — A state update re-renders the whole subtree.
  3. Synchronous network requests — Awaiting fetch in an event handler.
  4. Large DOM — Layout recalculation across thousands of nodes.
  5. Heavy CSS selectors — :has(), complex nth-child.
  6. Synchronous third-party scripts — Ads, analytics.
  7. ResizeObserver/MutationObserver storms — Callbacks triggering synchronous Layout.

Optimizing INP — breaking up Long Tasks:

// Bad — process 1000 items at once (800ms Long Task)
function processItems(items) {
  items.forEach(item => expensiveWork(item))
}

// Good — scheduler.yield() (standardized in 2024)
async function processItems(items) {
  for (const item of items) {
    expensiveWork(item)
    await scheduler.yield()  // Yield to main thread
  }
}

// Alternative — yield via setTimeout (for older browsers)
function processItemsYield(items, i = 0) {
  const deadline = performance.now() + 10
  while (i < items.length && performance.now() < deadline) {
    expensiveWork(items[i++])
  }
  if (i < items.length) {
    setTimeout(() => processItemsYield(items, i), 0)
  }
}

React-specific INP — useTransition:

import { useTransition, useState } from 'react'

function SearchBox() {
  const [isPending, startTransition] = useTransition()
  const [query, setQuery] = useState('')
  const [results, setResults] = useState([])

  function handleChange(e) {
    setQuery(e.target.value)  // Urgent update (input value)
    startTransition(() => {
      setResults(expensiveSearch(e.target.value))  // Lower priority
    })
  }
  // ...
}

RUM vs Lab Data — Why Lighthouse Is Often Wrong

There are two broad kinds of web performance data:

Lab Data (Synthetic Monitoring)

  • Lighthouse, WebPageTest, PageSpeed Insights (Lab tab)
  • Single measurement in a controlled environment
  • Pros: Reproducible, easy regression detection, CI/CD integration
  • Cons: Differs from real user networks, devices, and interactions

Field Data / RUM (Real User Monitoring)

  • Chrome UX Report (CrUX), Vercel Analytics, Sentry Performance, New Relic Browser, SpeedCurve
  • Collected from real user browsers via Performance API
  • Pros: Reflects real UX, captures device/network distribution
  • Cons: Noisy, hard to debug (you have to trace which specific interaction produced a given INP value)

Why Lighthouse scores differ from real scores:

  1. Lighthouse is fixed to Moto G Power + 4G simulation. Your real user might be on iPhone 15 + 5G.
  2. Lighthouse only measures page load → INP is based on session-wide interaction → not measurable in Lab.
  3. Lighthouse is a single run → real data is a distribution. Google uses CrUX's 75th percentile.
  4. Lighthouse has no cookies/login → real authenticated pages look different.
  5. Lighthouse has a fixed 360×640 viewport → differs from real device width distribution.

Recommended strategy:

  • Lab Data (Lighthouse): PR-level regression testing (Lighthouse CI), set upper bounds
  • RUM: Production monitoring, track p75/p95, drill down by country/device
  • When the two disagree, trust RUM
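
Since Google scores CrUX at the 75th percentile, aggregating beaconed RUM samples the same way only needs a tiny percentile helper. A sketch assuming the nearest-rank method, which is accurate enough for dashboard use:

```javascript
// Nearest-rank percentile over a list of metric samples.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b)
  const idx = Math.ceil((p / 100) * sorted.length) - 1
  return sorted[Math.max(0, idx)]
}

const lcpSamples = [1200, 1800, 2100, 2600, 4800]  // ms, from RUM beacons
console.log(percentile(lcpSamples, 75))  // → 2600
console.log(percentile(lcpSamples, 95))  // → 4800
```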

The 2025 RUM Stack

  • Vercel Speed Insights — Next.js integration, Core Web Vitals + Custom Events (10,000 events free)
  • Google CrUX — Monthly public data, BigQuery (free)
  • Sentry Performance — Errors + perf integration (from $26/mo)
  • SpeedCurve — Competitor comparison, custom dashboards (from $149/mo)
  • New Relic Browser — APM integration (free tier)
  • Cloudflare Web Analytics — Serverless, privacy-first (free)
  • Pingdom RUM — Strong geographic coverage (from $14.95/mo)

Resource Priority and Loading Strategies

Browsers don't fetch 100 resources simultaneously. Fetch Priority hints, the Preload Scanner, and HTTP/2/HTTP/3 stream prioritization work together to decide what loads when and in what order.

Resource Hints — Telling the Browser in Advance

<!-- Pre-resolve DNS -->
<link rel="dns-prefetch" href="https://api.example.com" />

<!-- Pre-open connection (DNS + TCP + TLS) -->
<link rel="preconnect" href="https://api.example.com" crossorigin />

<!-- Pre-download resource (for current page) -->
<link rel="preload" href="/hero.webp" as="image" fetchpriority="high" />
<link rel="preload" href="/main.js" as="script" />
<link rel="preload" href="/inter.woff2" as="font" type="font/woff2" crossorigin />

<!-- Prefetch next page (low priority) -->
<link rel="prefetch" href="/next-page.html" />

<!-- Pre-render full page (superseded by Speculation Rules) -->
<link rel="prerender" href="/next-page.html" />  <!-- Deprecated -->

Fetch Priority API (Chrome 101+, Safari 17+, Firefox 132+)

<!-- LCP image -->
<img src="/hero.webp" fetchpriority="high" />

<!-- Below-the-fold image -->
<img src="/below-fold.jpg" fetchpriority="low" loading="lazy" />

<!-- fetch() API -->
<script>
  fetch('/critical.json', { priority: 'high' })
  fetch('/analytics.json', { priority: 'low' })
</script>

Speculation Rules API — Standardized in 2024

Going beyond the limits of <link rel="prefetch"> and prerender, this API uses declarative JSON rules (URL patterns such as href_matches, or CSS selectors) to prefetch or prerender links the user is likely to visit.

<script type="speculationrules">
{
  "prerender": [{
    "urls": ["/product/1", "/product/2"],
    "eagerness": "moderate"
  }],
  "prefetch": [{
    "where": { "href_matches": "/product/*" },
    "eagerness": "conservative"
  }]
}
</script>

Eagerness levels:

  • immediate — Right now (aggressive)
  • eager — As soon as the hint is found
  • moderate — On link hover/touch
  • conservative — Right before click

Chrome 121+ can effectively deliver LCP of 0ms (instant display when navigating to a prerendered page).

Early Hints (HTTP 103)

A technique where the server sends a 103 Early Hints status with Link: </main.css>; rel=preload hints before the final 200 response.

HTTP/1.1 103 Early Hints
Link: </main.css>; rel=preload; as=style
Link: </hero.webp>; rel=preload; as=image

HTTP/1.1 200 OK
Content-Type: text/html
...

Supported by Cloudflare, Fastly, and Next.js (14.1+). Critical resources start fetching without waiting on TTFB → LCP drops by 200–400ms.

Image Optimization — 50% of All Web Bandwidth

Per the Chrome UX Report, images are 48% of total bytes on the average web page. Image optimization alone can cut LCP by more than 1 second.

Format Selection

  • JPEG — 100% support; photos, lossy; baseline compression
  • PNG — 100% support; transparency, lossless; large files
  • WebP — 97% support (excl. IE); Google format; 25–35% smaller than JPEG
  • AVIF — 93% support; AV1 codec; ~50% smaller than JPEG
  • JPEG XL — Safari only (experimental); future candidate; compression similar to AVIF

2025 recommendation: Use <picture> with AVIF → WebP → JPEG fallback.

<picture>
  <source srcset="/hero.avif" type="image/avif" />
  <source srcset="/hero.webp" type="image/webp" />
  <img src="/hero.jpg" width="1200" height="630" alt="..." loading="lazy" decoding="async" />
</picture>

Responsive Images — srcset + sizes

<img 
  src="/hero-800.jpg"
  srcset="/hero-400.jpg 400w,
          /hero-800.jpg 800w,
          /hero-1600.jpg 1600w,
          /hero-2400.jpg 2400w"
  sizes="(max-width: 768px) 100vw, 
         (max-width: 1200px) 50vw, 
         33vw"
  width="800" height="600"
  alt="Hero"
/>

Modern CDNs — Automatic Format Conversion

  • Cloudinary — URL-based transforms (w_800,f_auto,q_auto)
  • imgix — Dynamic parameters
  • Cloudflare Images — $5/mo for 100k images
  • Next.js Image — <Image /> component (auto AVIF/WebP)
  • Vercel Image Optimization — Build time + on-demand

Lazy Loading

<!-- Native lazy loading (Chrome 77+) -->
<img src="/hero.jpg" loading="lazy" />

<!-- Custom via IntersectionObserver -->
<script>
  const images = document.querySelectorAll('img[data-src]')
  const observer = new IntersectionObserver((entries) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        entry.target.src = entry.target.dataset.src
        observer.unobserve(entry.target)
      }
    })
  }, { rootMargin: '200px' })  // Preload 200px before
  images.forEach(img => observer.observe(img))
</script>

Warning: Never lazy-load the LCP image. Use loading="eager" + fetchpriority="high".

Font Optimization — The Main Offender Behind FOIT/FOUT and CLS

Until a web font downloads, text either isn't shown (FOIT) or shows a fallback that suddenly swaps (FOUT) — both hurt UX.

font-display Strategies

@font-face {
  font-family: 'Inter';
  src: url('inter.woff2') format('woff2');
  font-display: swap;      /* FOUT: show fallback then swap — causes CLS */
  font-display: optional;  /* Keep fallback if font doesn't arrive in 100ms — CLS 0 */
  font-display: block;     /* Wait up to 3s — FOIT */
  font-display: fallback;  /* 100ms + 3s */
}

Recommendation: Use font-display: optional + size-adjust for LCP text to match the fallback size.

Matching Fallback Size (size-adjust)

@font-face {
  font-family: 'Inter';
  src: url('inter.woff2') format('woff2');
  font-display: optional;
}

@font-face {
  font-family: 'Inter-fallback';
  src: local('Arial');
  size-adjust: 107.4%;   /* Scale Arial to Inter's size */
  ascent-override: 90%;
  descent-override: 22%;
  line-gap-override: 0%;
}

body {
  font-family: 'Inter', 'Inter-fallback', sans-serif;
}

Tools: Font Style Matcher, Fontaine (Vite plugin).

Preload + Subset

<!-- Preload critical fonts -->
<link rel="preload" href="/inter-latin.woff2" as="font" type="font/woff2" crossorigin />

Subsetting: Korean fonts (Noto Sans KR, Pretendard) are 3–10MB with full glyphs. Use unicode-range to load only the Korean region.

@font-face {
  font-family: 'Pretendard';
  src: url('Pretendard-KR.woff2') format('woff2');
  unicode-range: U+AC00-D7A3, U+1100-11FF, U+3130-318F;  /* Hangul only */
}

JavaScript Loading Strategies

JS is web performance's biggest foe and friend at once. Modern SPAs ship around 400KB of gzipped JS on average, and parse plus compile alone can take over 800ms on a mid-range mobile device.

async vs defer vs module

<!-- Blocking (never use) -->
<script src="/main.js"></script>

<!-- Async: execute as soon as downloaded, no order guarantee -->
<script src="/analytics.js" async></script>

<!-- Defer: download in parallel, execute after HTML parsing, order preserved -->
<script src="/main.js" defer></script>

<!-- Module: defer by default, order preserved -->
<script src="/app.js" type="module"></script>

Code Splitting

Supported by Webpack/Rollup/esbuild. Split JS per route/component to reduce initial load.

// React.lazy
const Heavy = React.lazy(() => import('./HeavyComponent'))

function App() {
  return (
    <Suspense fallback={<Skeleton />}>
      <Heavy />
    </Suspense>
  )
}

// Next.js dynamic
import dynamic from 'next/dynamic'
const Chart = dynamic(() => import('./Chart'), { ssr: false })

Tree Shaking

Analyzes ESM imports to remove unused code. Declaring your package side-effect-free in package.json is essential.

// package.json
{
  "sideEffects": false,
  "exports": {
    ".": "./dist/index.js"
  }
}

Third-Party Scripts — The Biggest Offender

A single 3rd-party script like Google Analytics, Facebook Pixel, or Intercom can push INP to 500ms.

Partytown (Builder.io) — Run 3rd-party scripts inside a Web Worker.

<script src="https://cdn.jsdelivr.net/npm/@builder.io/partytown/lib/partytown.js"></script>
<script type="text/partytown" src="https://www.googletagmanager.com/gtag/js?id=GA_ID"></script>

Next.js Script component:

import Script from 'next/script'

<Script src="https://analytics.example.com" strategy="lazyOnload" />
<Script src="https://critical.example.com" strategy="beforeInteractive" />

Long Tasks and the Main Thread Budget

Long Task: A JS task that blocks the main thread for more than 50ms. The #1 killer of INP.

Detecting Long Tasks

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.warn('Long Task:', entry.duration, 'ms', entry.attribution)
  }
}).observe({ type: 'longtask', buffered: true })

Long Animation Frames (LoAF) — New in 2024

A newer API that addresses the Long Task API's limits: it breaks down script, rendering, Layout, and Paint cost per animation frame.

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LoAF:', {
      duration: entry.duration,
      scripts: entry.scripts,    // which scripts took how long
      blockingDuration: entry.blockingDuration,
    })
  }
}).observe({ type: 'long-animation-frame', buffered: true })

scheduler.yield() — Yielding to the Main Thread

async function bulkWork(items) {
  for (const item of items) {
    process(item)
    if (navigator.scheduling?.isInputPending()) {
      await scheduler.yield()  // Yield immediately if input is pending
    }
  }
}

Web Worker — CPU Offloading

// main.js
const worker = new Worker('/worker.js')
worker.postMessage({ cmd: 'hash', data: largeString })
worker.onmessage = (e) => console.log('hashed:', e.data)

// worker.js
self.onmessage = async (e) => {
  const buf = new TextEncoder().encode(e.data.data)
  const hash = await crypto.subtle.digest('SHA-256', buf)
  self.postMessage(Array.from(new Uint8Array(hash)))
}

The Hydration Problem — The Fundamental Cost of SPAs

SPAs like React/Vue/Angular have a Hydration step — attaching JS to server-rendered HTML to make it interactive — which wrecks INP.

Hydration's Six-Step Cost (Addy Osmani)

  1. JS download — 200–500KB gzipped
  2. JS parse + compile
  3. React tree reconstruction (independent of server HTML)
  4. Attach event listeners
  5. Run useState/useEffect
  6. Commit

On mobile this whole chain takes 1–3 seconds. User clicks during that window are ignored.

Solution 1: Partial Hydration (Islands)

The approach of Astro, Marko, and Fresh (Deno). Most of the page is static HTML; only the interactive parts are hydrated as islands.

---
// Astro file
import Counter from './Counter.tsx'
---
<html>
  <body>
    <h1>Static content (not hydrated)</h1>
    <Counter client:visible />  <!-- Hydrate on viewport entry -->
    <Counter client:idle />     <!-- When idle -->
    <Counter client:load />     <!-- Immediately -->
  </body>
</html>

Solution 2: Resumability (Qwik)

Qwik's innovation: eliminate Hydration entirely and resume from state serialized into HTML at the moment of user interaction.

// Qwik component
export default component$(() => {
  const count = useSignal(0)
  return (
    <button onClick$={() => count.value++}>
      {count.value}
    </button>
  )
})

The handler URL is embedded in HTML:

<button on:click="app.js#Counter_onClick[0]">0</button>

Start with 0KB of JS, lazy-load just the handler JS on click. Makes TTI = LCP reality.

Solution 3: React Server Components + Streaming

React 18 + Next.js 14. Server Components aren't in the JS bundle → smaller client bundle. Streaming renders up to <Suspense> boundaries first → faster LCP.

Solution 4: Selective Hydration

A React 18 built-in. Hydrates <Suspense> boundaries based on priority. The area the user clicks gets processed first.

HTTP/3, QUIC, and the Network Layer

HTTP/3 passed 30% of all traffic in 2024 (W3Techs). Unlike HTTP/2 which runs on TCP, HTTP/3 runs on UDP-based QUIC.

HTTP/1.1 → HTTP/2 → HTTP/3

  • HTTP/1.1 — TCP; no multiplexing (6 connections per origin); head-of-line blocking; no 0-RTT
  • HTTP/2 — TCP; multiplexing; HOL blocking remains at the TCP level; no 0-RTT
  • HTTP/3 — UDP (QUIC); multiplexing; no HOL blocking; 0-RTT

HTTP/3 core wins:

  1. 0-RTT Resumption — Reuse prior connection keys, send data on the first request
  2. Connection Migration — Connection survives WiFi → cellular switch (Connection ID)
  3. No HOL Blocking — A lost packet in one stream doesn't block others
  4. Mandatory encryption — TLS 1.3 built in, no cleartext
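
Whether your assets actually arrive over HTTP/3 is visible in Resource Timing's nextHopProtocol field ('h3' = HTTP/3, 'h2' = HTTP/2, 'http/1.1'). A sketch with a pure tally helper; the browser call is shown commented, and stubbed entries let the snippet run anywhere:

```javascript
// Count how many resources were served over each protocol.
function protocolBreakdown(entries) {
  const counts = {}
  for (const e of entries) {
    const p = e.nextHopProtocol || 'unknown'
    counts[p] = (counts[p] || 0) + 1
  }
  return counts
}

// In the browser:
// console.log(protocolBreakdown(performance.getEntriesByType('resource')))

console.log(protocolBreakdown([
  { nextHopProtocol: 'h3' },
  { nextHopProtocol: 'h3' },
  { nextHopProtocol: 'h2' },
]))  // { h3: 2, h2: 1 }
```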

Real measurements (Cloudflare 2024):

  • Google Search: 3% faster median with HTTP/3, 10% faster at the top 10%
  • Facebook: 20% less video rebuffering
  • Akamai: 12% better mobile TTFB

CDN + Edge Computing

Cloudflare, Fastly, AWS CloudFront, Vercel Edge Network, Bunny.net. Caching content close to users to minimize latency.

2025 trends:

  • Edge Workers — Cloudflare Workers, Deno Deploy, Vercel Edge Functions. V8 Isolate–based with sub-ms cold start.
  • Regional Edge Cache — Three tiers (Origin → Regional → Edge) instead of the classic two (Origin → Edge).
  • Smart Placement (Cloudflare) — Place workers close to Origin, not users, to minimize DB latency.
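
An edge worker of this kind is mostly routing plus cache headers. Below is a Cloudflare Workers-style handler sketch, simplified to a plain object instead of the export default module syntax, with illustrative cache lifetimes: static assets get long immutable edge caching while HTML stays dynamic.

```javascript
// Minimal edge-handler sketch using the standard Request/Response types.
const handler = {
  async fetch(request) {
    const url = new URL(request.url)
    if (url.pathname.startsWith('/static/')) {
      // Hashed assets: cache for a year, never revalidate.
      return new Response('asset body', {
        headers: { 'cache-control': 'public, max-age=31536000, immutable' },
      })
    }
    // HTML: always rendered fresh.
    return new Response('<html>…</html>', {
      headers: { 'cache-control': 'no-store', 'content-type': 'text/html' },
    })
  },
}

handler
  .fetch(new Request('https://example.com/static/app.js'))
  .then((res) => console.log(res.headers.get('cache-control')))
// → public, max-age=31536000, immutable
```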

The 2025 Performance Tooling Stack

Measurement Tools

  • Chrome DevTools Performance — Most fundamental. The 2024 Performance Insights panel added real-time Core Web Vitals analysis.
  • Lighthouse — Built into Chrome. lighthouse-ci for CI, integrated with Vercel/Netlify.
  • WebPageTest — Deep analysis, filmstrip, connection details. Free + paid plans.
  • PageSpeed Insights — Lab (Lighthouse) + Field (CrUX) combined view.
  • Chrome UX Report — Public monthly, compare against competitors via BigQuery.

Profilers

  • SpeedScope — https://www.speedscope.app, flame graph visualization, imports Chrome Performance profiles.
  • Perfetto — Chrome DevTools and Chromium-internal tracing, shareable UI.
  • React DevTools Profiler — Component render-time breakdown.
  • Next.js Build Analyzer — @next/bundle-analyzer, bundle-size visualization.

RUM

  • Vercel Speed Insights + Web Analytics — Default for Next.js.
  • Sentry Performance — Error + RUM integration.
  • New Relic Browser — APM integrated.
  • Cloudflare Web Analytics — Free, privacy-first.
  • SpeedCurve — Strong for competitor comparison.

Optimization Tools

  • Next.js Image + Vercel Image Optimization — Auto AVIF/WebP.
  • Sharp (Node.js) — Server-side image transforms.
  • Partytown — Isolate 3rd-party scripts in a Worker.
  • Fontaine (Vite plugin) — Auto-generate fallback fonts.
  • Critical (Addy Osmani) — Extract critical CSS.

Production Optimization Checklist (2025)

Ordering when optimizing a real site:

  1. Add RUM — Measure real metrics with Vercel Speed Insights or the web-vitals library
  2. CDN + HTTP/3 + Brotli — Network layer
  3. Server TTFB under 200ms — DB query optimization, SSR caching, Edge Functions
  4. LCP image optimization — AVIF + fetchpriority="high" + preload
  5. Inline critical CSS, defer the rest — media="print" hack or the Critical library
  6. Font loading — font-display: optional + size-adjust fallback
  7. JS code splitting — Per-route + React.lazy
  8. Audit 3rd-party scripts — Partytown, next/script strategy="lazyOnload"
  9. Eliminate CLS — Image/iframe width/height, ad slot aspect-ratio, font fallback matching
  10. INP optimization — Break up Long Tasks (scheduler.yield), useTransition, Web Worker
  11. Speculation Rules — Prerender predictable next pages
  12. Regression prevention — Lighthouse CI, Performance Budget (webpack/rollup plugin)
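
The media="print" deferral from step 5 can be sketched as below (assumption: /non-critical.css holds below-the-fold styles). The stylesheet downloads at low priority without blocking render, then switches to media="all" once loaded; the noscript fallback covers JS-disabled clients.

```html
<style>/* inlined critical, above-the-fold CSS here */</style>
<link rel="stylesheet" href="/non-critical.css" media="print" onload="this.media='all'" />
<noscript><link rel="stylesheet" href="/non-critical.css" /></noscript>
```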

10 Common Anti-Patterns

  1. loading="lazy" on the LCP image — Permanent LCP delay.
  2. Custom fonts without a Fonts strategy — 3s FOIT, blank screen.
  3. Full React Hydration + static site — Using Next.js CSR instead of Astro/Next SSG.
  4. Blocking 3rd-party scripts — Dropping GA/GTM straight into head.
  5. Client-side Markdown rendering — Should be pre-converted to HTML server-side.
  6. Monitoring only Lighthouse score — Blind to real UX without RUM.
  7. Layout thrashing — Reading/writing offsetHeight inside a for loop.
  8. Loading giant image originals — Using a 4K image for a 200px thumbnail.
  9. Awaiting a synchronous fetch in an event handler — Severely degrades INP.
  10. State updates during Hydration — Endless Hydration/re-render loops.

Next Post Preview — The New Wave of Databases — PostgreSQL, pgvector, HNSW, and DB Strategy in the AI Era

The final destination of web performance optimization is usually the database. No matter how good your CDN is, a slow DB query destroys TTFB. The biggest story in databases from 2023–2025 was PostgreSQL's conquest of vector DBs. The pgvector extension is threatening dedicated vector DBs like Pinecone, Weaviate, and Qdrant, ushering in the era of PostgreSQL as an all-purpose DB.

In the next post:

  • Why PostgreSQL is #1 again — Top of the 2024 StackOverflow developer survey
  • pgvector and HNSW index — The math and practice of vector search
  • pgvector vs Pinecone vs Weaviate vs Qdrant — Perf/feature/cost comparison
  • PostgreSQL 17 leaps — Logical Replication, Incremental Backup
  • Supabase, Neon, PlanetScale, CockroachDB — Cloud PostgreSQL ecosystem
  • JSON, JSONB, GIN index — Seamless NoSQL integration
  • MVCC principles — The elegance of optimistic concurrency
  • Citus, TimescaleDB, PostGIS — The extension ecosystem
  • PostgreSQL + AI — RAG pipelines in practice

We'll cover all of the above. In an era where "one DB for everything" has become reality, we'll look at the background and production design. Let's trace why the web performance journey extends into the data layer.
