
What Are Core Web Vitals and Why Do They Matter?

What Are Core Web Vitals?

Core Web Vitals are a set of three specific metrics defined by Google that measure real-world user experience on web pages. They focus on three fundamental aspects of the user experience: loading speed, interactivity, and visual stability. Since June 2021, Core Web Vitals have been part of Google's page experience ranking signals, meaning they directly influence where your pages appear in search results.

The metrics are not theoretical. They are collected from real Chrome users through the Chrome User Experience Report (CrUX), which means Google evaluates your site based on how actual visitors experience it — not how it performs in a controlled lab environment. This distinction matters enormously, because a page that scores well in Lighthouse on a developer's MacBook may perform terribly for users on mid-range phones over cellular connections.

Core Web Vitals represent Google's attempt to distill the complex, multidimensional concept of "page quality" into three measurable, actionable numbers. They are not the only metrics that matter, but they are the ones Google has chosen to formalize, measure at scale, and incorporate into its ranking algorithm.

The Three Core Web Vitals

Largest Contentful Paint (LCP)

Largest Contentful Paint measures loading performance — specifically, how long it takes for the largest visible content element to render on the screen. This is typically a hero image, a prominent heading, a video poster image, or a large block of text. LCP captures the moment the user perceives the page as "loaded" because the main content is visible.

Target: Under 2.5 seconds. Pages with an LCP between 2.5s and 4.0s need improvement. Above 4.0s is poor.

LCP Score   | Rating
< 2.5s      | Good
2.5s - 4.0s | Needs Improvement
> 4.0s      | Poor

LCP is triggered by the rendering of one of these element types — whichever is largest within the viewport at the time of render:

  • <img> elements
  • <image> elements inside <svg>
  • <video> elements (the poster image)
  • Elements with a background image loaded via url()
  • Block-level elements containing text nodes

Common causes of poor LCP:

  • Slow server response time. If your server takes 2 seconds to respond (high TTFB), your LCP cannot possibly be under 2.5 seconds. Server performance is the foundation. For a deep dive into server response times and how they change under load, see our post on what is latency.
  • Render-blocking resources. CSS and synchronous JavaScript in the <head> block rendering until they are downloaded and parsed. Large stylesheets and scripts delay when the browser can start painting content.
  • Slow resource load times. If your hero image is a 3MB uncompressed PNG served from a distant origin server, it will take a long time to download and render. Image optimization and CDN delivery are critical.
  • Client-side rendering. Single-page applications that require JavaScript to execute before any content appears inherently have slower LCP than server-rendered pages because the browser must download, parse, and execute JavaScript before it can even begin rendering content.

Interaction to Next Paint (INP)

Interaction to Next Paint measures responsiveness — how quickly your page responds to user interactions. It replaced First Input Delay (FID) as a Core Web Vital in March 2024, and it is a significantly more comprehensive metric.

Where FID only measured the delay before the browser began processing the first interaction, INP measures the full latency of all interactions throughout the page's lifecycle — from the user's input (click, tap, or keypress) to the moment the browser paints the next frame reflecting that interaction. A page's INP is essentially its worst interaction latency (for pages with many interactions, a single high outlier is ignored). In field data, your site's reported INP is then the 75th percentile of those per-page values across all visits, and that is the number compared against the thresholds below.

Target: Under 200 milliseconds. Between 200ms and 500ms needs improvement. Above 500ms is poor.

INP Score     | Rating
< 200ms       | Good
200ms - 500ms | Needs Improvement
> 500ms       | Poor

INP is a harder metric to optimize than FID was because it captures the entire interaction lifecycle, not just the initial delay. An interaction's total latency consists of three phases:

  1. Input delay — the time between the user's action and the start of event handler execution (often caused by the main thread being busy with other work).
  2. Processing time — the time spent executing event handlers.
  3. Presentation delay — the time between event handler completion and the browser painting the next frame.

Common causes of poor INP:

  • Long JavaScript tasks. Any JavaScript task that runs for more than 50ms on the main thread can block user interactions. Large bundle sizes, complex component re-renders, and synchronous data processing are frequent culprits.
  • Excessive DOM size. Large DOM trees (over 1,500 elements) make style recalculations and layout operations slower, increasing the presentation delay phase.
  • Third-party scripts. Analytics, advertising, chat widgets, and social media embeds often execute JavaScript on the main thread, competing with your application's event handlers for CPU time.
  • Synchronous layout operations. Reading layout properties (like offsetHeight or getBoundingClientRect) inside event handlers forces the browser to perform synchronous layout, creating layout thrashing that delays the paint.

Cumulative Layout Shift (CLS)

Cumulative Layout Shift measures visual stability — how much the page's content moves around unexpectedly as it loads. A layout shift occurs when a visible element changes its position from one rendered frame to the next without being triggered by a user interaction (like a click or tap).

CLS quantifies these shifts using a score based on the impact fraction (how much of the viewport was affected) multiplied by the distance fraction (how far the element moved). The final CLS score is the largest burst of layout shift scores within a session window. A "burst" is a group of layout shifts that occur within a 5-second window, with no more than 1 second between individual shifts.
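The arithmetic is simple to sketch. Suppose shifting content affects 50% of the viewport and moves by 20% of the viewport's largest dimension (the function name is my own; the formula is the one described above):

```javascript
// Layout shift score = impact fraction * distance fraction.
// impactFraction: share of the viewport covered by the union of the
// element's before and after positions; distanceFraction: how far it
// moved, as a fraction of the viewport's largest dimension.
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// Content affecting 50% of the viewport that moves 20% of its height:
layoutShiftScore(0.5, 0.2); // 0.1, already at the edge of "Good"
```

A single shift of that size consumes the entire good-CLS budget, which is why even one late-loading banner can fail a page.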

Target: Under 0.1. Between 0.1 and 0.25 needs improvement. Above 0.25 is poor.

CLS Score  | Rating
< 0.1      | Good
0.1 - 0.25 | Needs Improvement
> 0.25     | Poor

CLS captures one of the most frustrating user experiences on the web: you are about to tap a button, and at the last moment the content shifts, causing you to tap something else entirely. Or you are reading an article, and an ad loads above, pushing the text you were reading off screen. These moments erode trust and make a site feel unpolished, regardless of how fast it loads.

Common causes of poor CLS:

  • Images and videos without dimensions. If you do not specify width and height attributes (or use CSS aspect-ratio), the browser cannot reserve space for the element before it loads. When it finally renders, everything below shifts down.
  • Dynamically injected content. Banners, cookie consent bars, promotional overlays, and ad slots that insert content above the fold push existing content down. If space is not reserved for them in the initial layout, they cause layout shifts.
  • Web fonts causing FOIT/FOUT. When a web font loads and replaces a fallback font, the difference in glyph sizes can cause text elements to change dimensions, shifting surrounding content. Using font-display: swap combined with fallback fonts that closely match the web font dimensions minimizes this.
  • Late-loading third-party embeds. Social media widgets, map iframes, and ad units that load after the initial page render and inject content into the layout without reserved space.

Why Core Web Vitals Matter for SEO

Google has been explicit that Core Web Vitals are a ranking signal. In practice, their impact on rankings works as a tiebreaker rather than a dominant factor. High-quality, relevant content still outranks fast but thin content. However, when two pages offer similar content relevance and authority, the one with better Core Web Vitals gets the edge.

The ranking signal is based on field data — real user experiences collected through CrUX — not lab data from Lighthouse. This means that optimizing your Lighthouse score without improving real-world performance does not affect your rankings. Google evaluates your site based on what actual users experience on actual devices and networks.

Google Search Console provides a dedicated Core Web Vitals report that shows which of your URLs are rated Good, Need Improvement, or Poor based on CrUX data. URLs are grouped by similar behavior, so fixing one page often improves the assessment for a group of similar pages.

Beyond the direct ranking impact, Core Web Vitals serve as a proxy for overall user experience quality. Sites that score well on CWV tend to have lower bounce rates, higher engagement, and better conversion rates — all of which indirectly support SEO through improved user signals.

How to Measure Core Web Vitals

Measuring Core Web Vitals effectively requires understanding the distinction between lab data and field data, and knowing when to use each.

Lab Tools

Lab tools measure Core Web Vitals in a controlled, simulated environment — a specific device, network speed, and geographic location. They are repeatable and useful for debugging, but they do not reflect the diversity of real-world conditions.

Tool                                | What It Measures                          | Best For
Lighthouse                          | LCP, CLS (INP not fully supported in lab) | Development-time auditing, CI pipeline checks
PageSpeed Insights                  | All three CWV (lab + field data)          | Quick assessment combining lab and real-user data
Chrome DevTools (Performance panel) | All three CWV via manual interaction      | Deep debugging of specific interactions and layout shifts

Field Data

Field data (also called Real User Monitoring, or RUM) collects Core Web Vitals from actual users visiting your site. This is what Google uses for ranking decisions and is the authoritative source for understanding your site's real performance.

Source                                      | What It Provides                                  | Best For
Chrome User Experience Report (CrUX)        | Aggregated CWV data from Chrome users             | Understanding Google's view of your site
Google Search Console                       | CWV status for your URLs with pass/fail groupings | Identifying which pages need work
RUM providers (e.g., web-vitals JS library) | Per-page, per-user CWV data you collect yourself  | Granular analysis, A/B testing performance changes

Why Field Data Matters More

Lab tests run on powerful machines with stable connections, while many of your users browse on mid-range phones over cellular networks. A page that scores 95 in Lighthouse on your MacBook Pro may have a poor LCP for users in emerging markets on budget devices. Field data captures this reality. Always prioritize field data when assessing whether your Core Web Vitals are truly healthy.

The web-vitals JavaScript library (maintained by Google) makes it straightforward to collect field CWV data from your own site:

import { onLCP, onINP, onCLS } from "web-vitals";

onLCP(console.log);
onINP(console.log);
onCLS(console.log);

In production, you would send these measurements to an analytics endpoint instead of logging them, building your own dataset of real-user Core Web Vitals that you can segment by page, device type, geography, and connection speed.
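A common pattern is to serialize each metric into a small payload and ship it with `navigator.sendBeacon`, which survives page unloads. A sketch, assuming a hypothetical `/analytics` endpoint (the serialization helper is my own; the metric fields are what the web-vitals library passes to its callbacks):

```javascript
// Turn a web-vitals metric object into a compact JSON payload.
function serializeMetric(metric, page) {
  return JSON.stringify({
    name: metric.name,     // "LCP", "INP", or "CLS"
    value: metric.value,   // milliseconds (LCP/INP) or unitless score (CLS)
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
    id: metric.id,         // unique per page load, useful for deduplication
    page,
  });
}

// In the browser, wire it up to the web-vitals callbacks:
// onLCP((m) => navigator.sendBeacon("/analytics", serializeMetric(m, location.pathname)));
```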

How to Improve Each Metric

Improving LCP

LCP is fundamentally about getting your main content visible as fast as possible. The strategies break down into reducing server response time, eliminating delays in resource delivery, and removing render-blocking resources.

Optimize images. For most pages, the LCP element is an image. Serve images in modern formats like WebP or AVIF, which offer significantly better compression than JPEG or PNG. Use responsive images (srcset) to serve appropriately sized images for each device. Lazy load images below the fold with loading="lazy", but never lazy load the LCP image — it should load eagerly.

Preload the LCP resource. If your hero image or LCP element's resource is not discoverable from the initial HTML (e.g., it is referenced in CSS or loaded via JavaScript), add a <link rel="preload"> tag so the browser starts fetching it immediately:

<link rel="preload" as="image" href="/images/hero.webp" />

Reduce server response time (TTFB). Your LCP cannot be faster than your TTFB. Optimize backend performance, use a CDN to serve pages from the edge, and implement server-side caching for pages that do not change frequently.

Eliminate render-blocking resources. Defer non-critical CSS and JavaScript. Inline critical CSS for above-the-fold content. Use async or defer attributes on script tags that do not need to block rendering.

Use a CDN. Serve static assets — and ideally your HTML — from a CDN with points of presence close to your users. This reduces the network latency component of LCP.

Improving INP

INP optimization is about keeping the browser's main thread available to respond to user input quickly. Every millisecond the main thread spends on non-interactive work is a millisecond that user interactions must wait.

Break up long tasks. JavaScript tasks that run for more than 50ms block the main thread and prevent the browser from responding to interactions. Use setTimeout, requestAnimationFrame, or the scheduler.yield() API to break large operations into smaller chunks that yield control back to the browser between steps.
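A sketch of the chunking pattern, using a `setTimeout`-based yield (the helper names are my own; `scheduler.yield()` can replace `yieldToMain` in browsers that support it):

```javascript
// Resolve on the next macrotask, letting the browser handle pending
// input and paint before we continue.
function yieldToMain() {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large array in small chunks, yielding between chunks so no
// single task monopolizes the main thread.
async function processInChunks(items, processItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      processItem(item);
    }
    await yieldToMain();
  }
}
```

Tune `chunkSize` so each chunk stays well under the 50ms long-task threshold for your per-item cost.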

Use requestIdleCallback for non-urgent work. Operations that do not need to happen immediately — analytics tracking, prefetching, non-visible DOM updates — should be deferred to idle periods when the main thread is not busy:

requestIdleCallback(() => {
  // Non-urgent work here
  sendAnalyticsData();
});

Reduce JavaScript bundle size. Less JavaScript means less parsing, compilation, and execution time. Audit your bundles for unused code (tree shaking), split code by route so users only download what they need, and evaluate whether heavy libraries can be replaced with lighter alternatives.

Debounce input handlers. Event handlers that fire on every keystroke, scroll, or mouse move can saturate the main thread. Debounce or throttle these handlers so they fire at a reasonable interval rather than on every event.
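A minimal debounce sketch (the helper name is my own):

```javascript
// Returns a wrapped function that waits until `delay` ms have passed
// with no new calls before invoking `fn` with the latest arguments.
function debounce(fn, delay) {
  let timer;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

// Only runs the search once typing pauses for 250ms:
// input.addEventListener("input", debounce((e) => runSearch(e.target.value), 250));
```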

Offload heavy computation to Web Workers. CPU-intensive operations like data transformation, image processing, or complex calculations can run in a Web Worker on a separate thread, leaving the main thread free to handle user interactions.

Improving CLS

CLS optimization is primarily about reserving space for content before it loads and avoiding unexpected layout changes.

Set explicit dimensions on images and video. Always include width and height attributes on <img> and <video> elements. Modern browsers use these to calculate the aspect ratio and reserve space before the resource loads:

<img src="product.webp" width="800" height="600" alt="Product photo" />

Alternatively, use the CSS aspect-ratio property:

img {
  aspect-ratio: 4 / 3;
  width: 100%;
  height: auto;
}

Reserve space for dynamic content. If your page injects banners, ads, or notification bars after initial render, use CSS to reserve the exact space they will occupy. A min-height on the container prevents content below from shifting when the dynamic element appears.

Avoid injecting content above the fold after load. Any element inserted above existing content pushes everything down. If you must add dynamic content (cookie banners, promotional bars), insert it at the very top of the viewport and push content down before the user begins reading, or overlay it on top of existing content without shifting the layout.

Use font-display: swap with matched fallbacks. Web fonts that block rendering (font-display: block) delay text visibility. Using font-display: swap shows a fallback font immediately and swaps to the web font when it loads. To minimize the layout shift from the swap, choose a fallback font with similar metrics (x-height, character width) to your web font. Tools like fontaine or the CSS size-adjust descriptor can help match fallback fonts more precisely.

Animate with transform instead of layout properties. CSS animations that change width, height, top, left, margin, or padding trigger layout recalculations and cause layout shifts. Animations using transform (translate, scale, rotate) and opacity run on the compositor thread and do not affect layout.

Core Web Vitals Under Load

Here is where Core Web Vitals and server performance intersect in a way that many developers overlook: your Core Web Vitals are only as good as your server performance under real traffic conditions.

LCP depends directly on server response time. If your server responds in 200ms under light traffic but 2,000ms under peak load, your LCP degrades from good to poor for every user during high-traffic periods. The same is true indirectly for INP — if your server-rendered pages include inline JavaScript that initializes interactive components, a slow server delays when that JavaScript begins executing, pushing out the responsiveness of the first interactions.
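The budget arithmetic is worth spelling out: every millisecond of TTFB comes straight out of the 2.5s good-LCP target, leaving less time for resource download and rendering. A sketch (the helper name is my own; 2500ms is the LCP threshold above):

```javascript
// How much of the "Good" LCP budget remains for resource download and
// rendering once the server has responded.
function remainingLcpBudget(ttfbMs, lcpTargetMs = 2500) {
  return lcpTargetMs - ttfbMs;
}

remainingLcpBudget(200);  // 2300ms left: comfortable
remainingLcpBudget(2000); // 500ms left: good LCP is nearly impossible
```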

This is exactly the scenario that load testing is designed to reveal. A Lighthouse audit runs against your server in isolation — one request, no contention, no queueing delay. It tells you nothing about how your CWV will perform during a product launch, a marketing campaign, or a seasonal traffic peak. Load testing fills that gap by simulating the concurrent traffic that degrades server performance and, consequently, degrades your real-world Core Web Vitals.

LoadForge helps you validate that your server response times stay fast under realistic traffic patterns. By running load tests that simulate your expected peak concurrency, you can confirm that your TTFB remains within the budget required to achieve good LCP scores even during your busiest periods. If your load test reveals that TTFB exceeds 600ms at 1,000 concurrent users, you know your LCP will suffer for real users at that traffic level — and you can optimize and retest before those users ever experience the degradation.

The connection between server performance and Core Web Vitals is one of the strongest arguments for making load testing a regular part of your performance workflow, not just a one-time exercise before launch. Traffic patterns change, codebases grow, and the performance characteristics of your application evolve over time. Regular load testing ensures your CWV stay healthy as your application and traffic scale.

Conclusion

Core Web Vitals — LCP, INP, and CLS — distill the user experience into three measurable metrics that directly influence search rankings and user satisfaction. Improving them requires a combination of frontend optimization (image compression, JavaScript reduction, layout stability) and backend performance (fast server response times, efficient resource delivery).

The critical insight is that Core Web Vitals are measured in the field, on real devices, under real traffic conditions. Lab scores are useful for development but do not determine your rankings or your users' experience. Ensuring good CWV at scale requires validating that your server performance holds up under concurrent load — which is where load testing becomes essential.

For a comprehensive overview of performance testing methodologies, see our performance testing guide. For practical guidance on testing your website under load, start with our guide on website load testing.
