Core Web Vitals (LCP, INP, and CLS) are Google's field-measured page-experience signals. In 2026 they matter twice: AI engines like Google AI Overviews, ChatGPT, and Perplexity also deprioritize slow pages when selecting sources. Targets: LCP under 2.5s, INP under 200ms, CLS under 0.1 — measured in CrUX field data, not Lighthouse scores. Pressfit ties CWV wins to pipeline movement, not page-speed numbers.
What Core Web Vitals are and why 2026 is different
Core Web Vitals (CWV) are Google's three field-measured user-experience metrics, sourced from CrUX (the Chrome User Experience Report) at the 75th percentile of real visits:
- LCP (Largest Contentful Paint) — how fast the main content of a page renders. Target: under 2.5 seconds.
- INP (Interaction to Next Paint) — how responsive the page feels when a buyer clicks, taps, or types. Replaced FID in March 2024. Target: under 200 milliseconds.
- CLS (Cumulative Layout Shift) — how much the layout jumps around as the page loads. Target: under 0.1.
Per web.dev's Core Web Vitals overview and Google Search Central's Page Experience documentation, the three metrics represent loading, interactivity, and visual stability respectively.
Two adjacent metrics matter for diagnosis even though they are not Core Web Vitals themselves:
- FCP (First Contentful Paint) — when the first pixel of content shows up.
- TTFB (Time to First Byte) — how fast the server and edge respond.
A bad LCP almost always traces back to a bad TTFB or a render-blocking asset that delays FCP — so engineers debug all five together.
What changed in 2026 was not the thresholds but the audience. CWV used to be a Google ranking input on the organic SERP, weighted modestly against content relevance. Today, AI engines — AIO (Google AI Overviews), Perplexity, ChatGPT browsing, Gemini — also weight CWV signals when they decide which sources to crawl, cache, and cite. A slow, layout-shifty marketing page is now penalized twice: once on the blue-link SERP, once on the AEO surface where buyers increasingly start their research. That is the angle that matters for technical marketing buyers reading this in 2026, and it is the reason a CWV playbook scoped specifically to marketing sites — JS-heavy, font-heavy, hero-video-heavy — is worth writing.
LCP optimization for marketing sites
LCP measures the time from navigation start to when the largest visible element above the fold finishes painting. On a marketing site that element is almost always one of three things: a hero headline rendered in a custom web font, a hero video or animated illustration, or a screenshot of the product UI. Each of those has a marketing-site-specific failure mode that generic CWV guides do not name — and HTTP Archive's Web Almanac documents that mid-market marketing sites typically miss the 2.5-second LCP threshold by 0.6 to 1.4 seconds, with image loading and render-blocking JavaScript the top two causes.
Custom font blocking. Brands ship custom fonts because typography is brand. The default font-loading behavior in most Next.js + Tailwind setups still produces a render-blocking font request that delays the hero headline by 400 to 1,200 milliseconds. The fix is not to remove the brand font — it is to declare it with `font-display: swap` (or `optional`), self-host the WOFF2 file from the same origin so it skips a third-party DNS hop, and preload the file in the document head with `<link rel="preload" as="font" crossorigin>`. The Next.js `next/font` primitive automates most of that and should be the default on any new marketing build.
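For reference, a minimal sketch of the `next/font/local` pattern, assuming a self-hosted WOFF2 file; the font name and file path are placeholders:

```tsx
// app/layout.tsx: minimal next/font/local sketch (App Router).
// "brand-sans.woff2" is an illustrative placeholder for the real brand font.
import localFont from 'next/font/local'
import type { ReactNode } from 'react'

const brandSans = localFont({
  src: './fonts/brand-sans.woff2', // self-hosted, same origin: no third-party DNS hop
  display: 'swap',                 // paint fallback text immediately, swap when loaded
  preload: true,                   // emits <link rel="preload" as="font" crossorigin>
})

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en" className={brandSans.className}>
      <body>{children}</body>
    </html>
  )
}
```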
Hero video and animated assets. A 6 MB looping hero video pushes LCP past 4 seconds on a 4G connection and pushes mobile CrUX into the red regardless of how the rest of the page is built. The pattern that holds LCP green is to use a high-quality poster image as the LCP element (preloaded, served as AVIF or WebP, sized correctly), then load the video lazily after first paint. If the video must autoplay, compress aggressively (CRF 30+ for H.264, lower for AV1), strip audio, cap dimensions to 1280px on the longest edge, and serve from a CDN with HTTP/2 or HTTP/3.
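A sketch of the poster-first pattern under those assumptions; file names and the CDN host are placeholders, and the exact markup will vary by stack:

```html
<!-- Preload the poster so it wins bandwidth as the LCP element. -->
<link rel="preload" as="image" href="/hero-poster.avif" fetchpriority="high" />

<video id="hero" poster="/hero-poster.avif" width="1280" height="720"
       muted loop playsinline preload="none"></video>

<script>
  // Attach the compressed source only after the load event, so the video
  // download never competes with the poster paint. Autoplay may still be
  // blocked by the browser; the poster remains visible either way.
  addEventListener('load', () => {
    const v = document.getElementById('hero');
    v.src = 'https://cdn.example.com/hero-1280.mp4';
    v.play().catch(() => {});
  });
</script>
```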
Lazy hydration and React Server Components. Marketing pages built on Next.js App Router get a free LCP win by rendering the hero as a Server Component and reserving client-side hydration for the interactive bits below. The mistake we see most often is shipping the entire page as a single Client Component because one button needed an onClick handler. The fix is to scope 'use client' to the smallest possible boundary — the button, not the section, and never the page.
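A sketch of that boundary, with illustrative file and component names: the page itself stays a Server Component and simply renders the button below.

```tsx
// app/DemoButton.tsx: the smallest possible 'use client' boundary.
// Only this component ships and hydrates client-side JavaScript;
// the hero section around it renders entirely on the server.
'use client'

export default function DemoButton() {
  return (
    <button onClick={() => console.log('demo requested')}>
      Book a demo
    </button>
  )
}
```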
Render-blocking JavaScript. Marketing analytics tags (GTM, GA4, ad pixels, chat widgets) routinely add 600 to 1,500 KB of synchronous JS that delays LCP. Audit the GTM container against a real CrUX report — most sites can defer 70% of those tags, recover 800ms of LCP, and lose nothing in measurement quality. Behavioral intelligence telemetry should sit on top of this audit, not inside it.
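One hedged pattern for the deferral, using the built-in `next/script` loader; the container ID is a placeholder, and which tags can safely wait is exactly what the audit decides:

```tsx
// Fetch the GTM container during idle time instead of blocking render.
// GTM-XXXXXXX is a placeholder container ID.
import Script from 'next/script'

export function DeferredTags() {
  return (
    <Script
      id="gtm"
      strategy="lazyOnload" // loads after the page is fully loaded, during idle
      src="https://www.googletagmanager.com/gtm.js?id=GTM-XXXXXXX"
    />
  )
}
```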
Field measurement is the discipline that separates teams who fix the right thing from teams who chase a Lighthouse number. Use CrUX to segment by device class and only ship optimizations that move the 75th percentile of mobile CrUX.
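To make that concrete, a sketch of pulling the p75 field numbers for a single URL from the CrUX API; the page URL and API-key wiring are placeholders:

```ts
// Query the CrUX records:queryRecord endpoint for one URL's phone-traffic p75s.
// process.env.CRUX_API_KEY and the page URL are placeholders.
const res = await fetch(
  `https://chromeuserexperience.googleapis.com/v1/records:queryRecord?key=${process.env.CRUX_API_KEY}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      url: 'https://example.com/pricing',
      formFactor: 'PHONE',
      metrics: [
        'largest_contentful_paint',
        'interaction_to_next_paint',
        'cumulative_layout_shift',
      ],
    }),
  },
)
const { record } = await res.json()
console.log(record.metrics.largest_contentful_paint.percentiles.p75) // milliseconds
```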
INP optimization for JS-heavy marketing sites
INP (Interaction to Next Paint) is the metric that punishes marketing sites hardest in 2026. It measures the latency of the worst (or near-worst) interaction across the lifetime of a page visit — a click, a tap, a key press — from input to the next visual update. The threshold for good is under 200ms at the 75th percentile. Most marketing sites built on React, Next.js, or a heavy headless CMS clock in between 280ms and 600ms because they ship far more JavaScript than the page actually needs and run far too much of it on the main thread during interaction.
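Field INP is straightforward to instrument with Google's `web-vitals` package; the reporting endpoint below is a placeholder:

```ts
// Report the page's INP to a RUM endpoint as the user interacts.
// '/rum' is a placeholder path; web-vitals invokes the handler with the
// current worst-case interaction value as it changes over the visit.
import { onINP } from 'web-vitals'

onINP((metric) => {
  navigator.sendBeacon(
    '/rum',
    JSON.stringify({ name: metric.name, value: metric.value, id: metric.id }),
  )
})
```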
The four highest-leverage fixes:
- Cut the bundle. Most marketing sites ship 800 KB to 2 MB of JS gzipped. The right number for a marketing page is under 200 KB. Audit with `@next/bundle-analyzer` or `webpack-bundle-analyzer`. Replace heavy date libraries (Moment) with `date-fns`; replace Lodash with native ES methods; replace big animation libraries (Framer Motion on every page) with CSS where possible. Tree-shake aggressively, set `swcMinify` on, and verify the result is actually smaller in production with the `NEXT_PUBLIC_` environment compiled out.
- Break up long tasks. Any JS task longer than 50ms blocks the main thread and inflates INP. The browser cannot respond to a click while a task is running. Split work with `scheduler.yield()` (where supported), `requestIdleCallback` for non-critical work, or a `setTimeout(fn, 0)` trampoline for legacy paths (see the sketch after this list). Heavy hydration on a marketing page is itself a long task — defer it.
- Move work off the main thread. Behavioral telemetry, A/B test bucketing, third-party analytics, and any JSON parsing larger than 30 KB belong in a Web Worker or a Partytown sandbox, not on the main thread. The default Next.js `Script` component with `strategy="worker"` is the cheapest path to this for GTM-managed tags.
- Audit React hydration cost. A pure marketing page with no above-the-fold interactivity should hydrate lazily or not at all. React Server Components on Next.js App Router make this the default for new content. Existing pages built on Pages Router can use `next/dynamic` with `{ ssr: false, loading: () => null }` to defer hydration of below-the-fold sections until they enter the viewport.
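A minimal sketch of the yielding pattern from the second item, assuming the work can be chunked into an array of synchronous tasks:

```ts
// Run chunked work while yielding to the main thread between chunks,
// so pending clicks and taps can paint before the next chunk starts.
// `tasks` is an illustrative stand-in for the real workload.
async function runWithYields(tasks: Array<() => void>): Promise<void> {
  for (const task of tasks) {
    task() // keep each chunk comfortably under 50 ms

    const scheduler = (globalThis as any).scheduler
    if (scheduler?.yield) {
      await scheduler.yield() // scheduler.yield(), where supported
    } else {
      await new Promise((resolve) => setTimeout(resolve, 0)) // legacy trampoline
    }
  }
}
```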
The marketing-site-specific INP failure mode is the chat widget. Drift, Intercom, Qualified, and similar tools inject 200 to 500 KB of JavaScript and listen on every click in the document. Measure your INP with the chat widget loaded versus with it gated behind a 5-second delay — the delta is usually 60 to 120ms, which is the difference between green and red on the threshold. Gate it. Behavioral intelligence on the page does not require the chat widget to be present at first interaction; it requires it to be available when a buyer is ready to escalate.
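A hedged sketch of the gate; the loader URL stands in for whatever snippet your chat vendor provides:

```ts
// Load the chat widget after 5 seconds, or on the first intent signal,
// whichever comes first, so first-interaction INP never pays for it.
let chatLoaded = false

function loadChatWidget(): void {
  if (chatLoaded) return
  chatLoaded = true
  const s = document.createElement('script')
  s.src = 'https://widget.example-chat.com/loader.js' // placeholder vendor URL
  s.async = true
  document.head.appendChild(s)
}

const timer = setTimeout(loadChatWidget, 5000)
const onIntent = () => {
  clearTimeout(timer)
  loadChatWidget()
  removeEventListener('scroll', onIntent)
  removeEventListener('pointerdown', onIntent)
}
addEventListener('scroll', onIntent, { passive: true })
addEventListener('pointerdown', onIntent)
```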
The mistake we see most often: a marketing team A/B-tests a new hero variant, ships a 30 KB animation library to support it, lifts CTR on the variant by 8%, and silently pushes INP from 180ms to 320ms — which costs more in AI-citation eligibility and organic ranking than the variant gains. INP is a system-level metric. Treat every JS addition as a tax on it.
CLS optimization for marketing pages
CLS (Cumulative Layout Shift) measures how much visible content moves around as the page loads, weighted by the size and distance of the shift. The threshold for good is under 0.1 at the 75th percentile of CrUX. The four CLS culprits that show up on virtually every marketing site:
- Web fonts swapping in. When a custom brand font loads after the fallback system font has already painted, the headline reflows. Reserve the space with `size-adjust`, `ascent-override`, and `descent-override` CSS metrics that match the fallback font to the brand font's vertical rhythm. Next.js `next/font` generates these automatically. Manual setups on Webflow, Framer, or hand-rolled stacks need them set explicitly (see the CSS sketch after this list).
- Late-loading social proof modules. Logo strips, customer carousels, and G2 badges load after the hero and push everything below them down by 80 to 200 pixels. The fix is to reserve space with explicit `height` or `aspect-ratio` CSS on the container — even a placeholder skeleton holds layout. Never let third-party badge widgets compute their own height at runtime.
- Cookie banners and consent UIs. A consent banner that injects above the hero after page load shifts every visible element down by 60 to 120 pixels and trashes CLS. Reserve banner space at the top of the layout from server render, or render it as a fixed-position overlay that does not push content. Either fix is a one-day engineering ticket; most teams have not filed it.
- Ads, embeds, and iframes. Demo-request iframes, Calendly embeds, retargeting pixels, and HubSpot forms all render asynchronously. Reserve dimensions on the parent container. The `aspect-ratio` CSS property is supported across every modern browser and is the cleanest fix.
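The two CSS moves in one sketch; the override percentages are illustrative and should be derived from your actual font pair (`next/font` computes them for you):

```css
/* Size the fallback font to the brand font's metrics so the swap
   does not reflow the headline. Numbers below are placeholders. */
@font-face {
  font-family: 'BrandSans Fallback';
  src: local('Arial');
  size-adjust: 104%;
  ascent-override: 92%;
  descent-override: 24%;
}

h1 {
  font-family: 'BrandSans', 'BrandSans Fallback', sans-serif;
}

/* Reserve space for async modules so they never push content down. */
.logo-strip { height: 96px; }          /* explicit placeholder height */
.demo-embed { aspect-ratio: 16 / 9; }  /* container holds the iframe's box */
```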
CLS is the cheapest of the three CWV metrics to repair — most sites can get to under 0.05 in a single sprint of CSS work. The reason it stays red on so many sites is that nobody owns it. Engineering treats it as a design problem; design treats it as an engineering problem. The fix is one person with a CrUX dashboard and a backlog.
How AI engines weight CWV in citation selection
The 2026-specific angle is that CWV is no longer just a Google ranking input. AI engines — Google AI Overviews, Perplexity, ChatGPT browsing, Gemini — fetch source pages live or from caches and use page-quality signals to decide which to cite. Pressfit's first-party study across our engagements (directional, not yet published) shows three patterns that consistently separate cited pages from passed-over pages on the same query:
- Field-measured LCP under 2.5 seconds correlates with materially higher citation rates in AIO and Perplexity on head terms, holding content quality constant. The signal is strongest on Perplexity, which fetches source pages live and times out aggressively on slow responses; pages that fail TTFB get dropped before the model sees the content at all.
- Stale content is penalized harder than slow content on AIO. A page with a `dateModified` older than 18 months and CWV in the green is cited less often than a fresher page with marginal CWV. The interaction between freshness and speed is multiplicative — slow plus stale is the worst combination, and it is the combination most marketing sites accumulate over time.
- JSON-LD `BlogPosting` and `FAQPage` schema, present and parsable, correlates with higher AIO inclusion. CWV alone does not produce citation; it gates whether the model bothers to evaluate the page in the first place. Schema makes the content machine-readable once it does (a minimal example follows this list).
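A minimal `BlogPosting` example; every value is a placeholder:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "Core Web Vitals for marketing sites",
  "datePublished": "2026-01-15",
  "dateModified": "2026-02-01",
  "author": { "@type": "Organization", "name": "Example Co" }
}
</script>
```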
The practical implication for marketing teams: CWV is no longer a hygiene checkbox you ignore once it is green. It is an AI-citation eligibility filter. Pages that fail field-measured CWV get filtered out of AEO consideration regardless of how strong the content is. That is a behavioral intelligence question — which pages get cited, on which prompts, and what page-quality signal moved the model — and it is exactly the layer most teams cannot see today.
Common marketing-site CWV mistakes
- Optimizing Lighthouse instead of CrUX. A 100 score on a developer laptop with cached assets is not what your buyers experience on a 4G phone. Page-speed wins that do not show up in field data are not wins.
- Treating CWV as a launch checklist instead of a steady-state SLO. CWV regresses every sprint as new tags, components, and content blocks ship. Without a continuous CrUX monitor wired to the engineering on-call, the site degrades quietly between audits.
- Shipping the chat widget on every page. Demo-stage pages need it; the homepage and high-traffic blog pillars usually do not. Gate by intent or interaction.
- Ignoring INP because LCP is green. LCP is the easy CWV metric. INP is where modern marketing sites actually fail in 2026, and the fixes are different — bundle size and main-thread work, not image optimization.
How Pressfit.ai approaches CWV in client engagements
Pressfit.ai's performance optimization engagement rebuilds marketing sites for speed, mobile, and accessibility — measured in Core Web Vitals field data, not Lighthouse scores from a developer's laptop. The standard scope ships image, font, and bundle audits to identify what is actually slowing first paint; an edge caching and CDN strategy tuned to the traffic patterns of the actual site; a mobile-first performance pass that holds the LCP target across 4G, not just desktop; and before/after measurement captured in CrUX so the lift is visible in the real-user dataset Google actually uses.
Behavioral intelligence is the layer that turns CWV from a Lighthouse-score chase into a pipeline read. CWV is one input alongside content engagement, ICP-qualified ad performance, and pipeline contribution — a CWV improvement that does not move a downstream behavioral signal is treated as a vanity metric. The practical effect is that the LCP, INP, and CLS work happens in the same engagement as the AEO and AIO citation work and the funnel telemetry, not in isolation.
The cold-cache LCP target is under 2.5 seconds on 4G, on every page that matters to the buying journey. The audit ranks pages by traffic and pipeline contribution so the engagement focuses where the speed delta actually compounds, not where it is easiest to fix. WCAG AA accessibility lives in the same scope so the rebuild does not trade speed against the buyers using assistive technology.
Frequently asked questions
What are good Core Web Vitals thresholds for a marketing site?
The same as the universal thresholds: LCP under 2.5 seconds, INP under 200 milliseconds, CLS under 0.1, all measured as field data at the 75th percentile from CrUX. Marketing sites are not exempt from the thresholds; they are systematically worse at hitting them because of JS-heavy stacks and brand-driven font and video choices.
Do AI engines really weight Core Web Vitals when picking citations?
Yes, directionally. Pressfit.ai's first-party study across engagements shows that pages with green field-measured CWV are cited at materially higher rates by AIO and Perplexity on the same queries, holding content quality constant. Slow pages are dropped before the model evaluates content; stale pages with marginal CWV are cited least.
Why does my Next.js marketing site have a bad INP score?
Three causes, in order of frequency: (1) the bundle is too large and hydration runs long tasks on the main thread; (2) third-party tags (chat widgets, analytics, A/B test SDKs) listen on every interaction; (3) below-the-fold sections hydrate eagerly instead of lazily. Cut the bundle, defer non-critical tags to a Web Worker, and scope 'use client' to the smallest possible boundary.
Is Lighthouse enough to grade Core Web Vitals?
No. Lighthouse is a lab tool that runs on a clean machine with synthetic conditions. CWV is graded on field data from CrUX, which is what real Chrome users actually experience. Use Lighthouse in CI to catch regressions and CrUX (or a RUM tool that reports the same metrics) to grade the production state.
How often do Core Web Vitals scores regress on a marketing site?
Effectively every sprint. New components, marketing tags, and content blocks add weight; CMS editors add uncompressed images; vendors push SDK updates that bloat their tag. Treat CWV as a steady-state SLO with continuous monitoring and an on-call owner, not as a quarterly audit deliverable.
What makes Pressfit.ai different on Core Web Vitals work?
Pressfit.ai ties CWV to behavioral intelligence and pipeline outcomes, not to a Lighthouse score. The same engagement instruments AI-citation eligibility, content-block engagement, ICP-qualified ad performance, and funnel telemetry — so a CWV win shows up as a citation lift and a pipeline lift, not just a green badge.
What's next
Core Web Vitals stop being a checkbox when they get wired to AI-citation outcomes and pipeline behavior. Want to see what that looks like on your stack? Book a Pressfit.ai discovery call and we will field-measure your CWV, audit AIO citation eligibility, and map the highest-leverage fixes before recommending a single line of code. Related reading: our performance optimization product page, AI visibility, and how to rank in Google AI Overviews.