
The Modern Consumer Decision Journey across AI, Search & More

Pressfit Team · 13 min read

The modern consumer decision journey is fragmented across 12+ destinations: Google search, AI Overviews, ChatGPT, Perplexity, Claude, Gemini, Reddit, YouTube, Amazon reviews, paid social, and the brand site. Buyers loop between them inside a single decision. Most brands instrument only their own site, which is 20 to 40 percent of the path. Pressfit.ai uses behavioral intelligence on the brand-owned slice and AI visibility on the off-property slice.

What the modern consumer decision journey actually is

The consumer decision journey is the multi-platform research and evaluation path a buyer travels before purchase, from category awareness through post-purchase advocacy. The phrase comes from McKinsey's 2009 framework, which replaced the linear funnel with a circular loop of consideration, evaluation, post-purchase, and ongoing advocacy. The 2009 model was correct that buyers loop, share, and re-evaluate. It is incomplete in 2026 because it predates the AI-search shift and the explosion of off-property research surfaces.

The modern journey is fragmented across dozens of destinations a single buyer touches inside one purchase decision — Google search, AI Overviews, ChatGPT, Perplexity, Claude, Gemini, Reddit, niche forums and Discord servers, YouTube, TikTok, Instagram, Amazon reviews, paid social on Meta and TikTok, retargeting, email, podcasts, comparison sites, and the brand-owned site itself. The buyer does not visit them in order. They cycle between destinations, often within the same hour: ask ChatGPT a category question, click through to a Reddit thread, scroll Amazon reviews, see a retargeting ad, then land on the brand site to verify a claim.

That is not a hypothetical. It is the documented behavioral pattern from McKinsey's original Consumer Decision Journey research extended into the AI-search era, and it is the reason most brand dashboards under-count their actual influence on the buyer. The modern consumer decision journey is not a funnel. It is a destination map, and most of the destinations are off your property.

The traditional funnel and where it broke

The inverted-triangle funnel (awareness narrowing to consideration, evaluation, decision, and retention) traces back to Elias St. Elmo Lewis's AIDA model of 1898 and lived essentially unchanged in marketing textbooks for more than a century. It described a buyer being pulled through a single, linear, brand-controlled path. It worked, roughly, in a media environment where brands controlled the channels, the comparison happened on paper, and the reviews lived in a magazine the brand could not see.

The funnel broke in three waves. The first crack was the search-engine era starting in the late 1990s: buyers started researching independently before any brand touch, and the linear model could not represent the loops that opened up. The second crack was social and review-platform proliferation from 2010 to 2018: Yelp, Amazon reviews, YouTube reviews, and Reddit threads became decision destinations the brand had no control over and limited visibility into. The third crack, the structural one, is the AI-search shift since 2023. AI Overviews, ChatGPT, Perplexity, Claude, and Gemini now answer the buyer's question before they ever visit a brand site, and they cite a small set of sources that the buyer accepts as the synthesis. The funnel diagram has no slot for any of that.

Most consumer-journey content published between 2018 and 2022 is broken on contact in 2026 because it was built for a paid-social-driven journey, where the awareness layer was a Meta retargeting cohort and the evaluation layer was a landing-page CRO experiment. That model assumes the brand can buy its way into the awareness phase, which AI-search and review-platform behavior now compress into a layer the brand cannot buy into directly. The traditional funnel survives as taxonomy, useful for naming the stages a buyer is in, but it does not describe the journey, and it does not describe the platforms where the journey happens.

McKinsey's 2009 update: closer, still incomplete

McKinsey's 2009 Consumer Decision Journey research was the most influential update to the buyer-path model in a generation. It replaced the linear funnel with four loops: initial consideration set, active evaluation (where the consideration set expands as the buyer researches), the moment of purchase, and a post-purchase experience that feeds back into the next consideration set. The CDJ correctly captured three things the linear funnel missed: that buyers add brands to their consideration set during evaluation rather than only at the start, that post-purchase behavior is part of the journey rather than after it, and that advocacy and research overlap: the same buyer who just bought is also the next buyer's reviewer.

What the 2009 framework cannot represent is the platform fragmentation that arrived after it was published. McKinsey's diagram has labels for the loops but no map of where the loops happen. In 2009, "active evaluation" happened mostly on the brand site, in a comparison-shopping engine, or in a print review. In 2026, active evaluation happens across a dozen destinations the brand cannot directly instrument: a ChatGPT thread, a Perplexity citation list, a Reddit subreddit, an Amazon review section, a YouTube comparison video, a TikTok product review, an Edelman-tracked influencer post, and the brand site as one node among many. The journey loops the way McKinsey said it would. The loops just run through platforms McKinsey's framework predates.

The behavioral intelligence frame keeps the McKinsey loop and adds the platform map underneath it. The same four-loop structure still applies; the work is to see which platforms the buyer is using to do each loop, and to instrument what you can while measuring what you cannot. That is the 2026 update.

Where buyers actually research in 2026 (the multi-platform map)

The 2026 consumer decision journey runs across dozens of destinations. Not every buyer touches all of them inside a single decision, but most considered purchases (over $50, or with health, financial, or category-defining stakes) touch 5 to 9 of them. The destinations are best read as a map, not a sequence: the buyer routes through them based on the question they have at the moment, not on a fixed funnel position.

  1. Search engines (Google, Bing, AI Overviews). The dominant entry point. Google Search Central's documentation on AI features describes how AI Overviews summarize answers above the organic results and cite a small set of sources. The buyer reads the summary and frequently never clicks through, a behavior pattern 5W Research tracks across consumer verticals. AI Overview presence is now a top-of-funnel signal in its own right.
  2. LLMs (ChatGPT, Claude, Gemini, Perplexity). Used as research engines for category questions, comparisons, and recommendations. The buyer asks a question, the LLM synthesizes an answer, sometimes with citations. ChatGPT skews toward broad synthesis, Perplexity toward citation-first answers, Claude toward longer-form analytical questions, Gemini toward Google-integrated workflows. Each platform sources differently, which is why a brand can rank organically and still be invisible in LLM answers: the overlap between traditional Google rankings and LLM citation patterns is small.
  3. Forums and community (Reddit, Discord, Slack groups). The trust layer for category research. Buyers searching "is X worth it" or "X vs Y reddit" are looking for unfiltered peer opinion. Reddit threads now rank in Google for many product queries and are quoted directly in AI Overviews and LLM answers. Niche Discord and Slack communities are a growing surface that is harder to monitor but heavier on category-defining recommendations.
  4. Video and podcast (YouTube, TikTok, podcast platforms). Comparison videos, unboxings, long-form reviews, and category interviews. The buyer who has read about a category is now watching a creator demonstrate it or hearing a category framed in an audio interview. YouTube comments and podcast notes are secondary review and citation surfaces.
  5. Reviews and comparison aggregators (Amazon, G2, Trustpilot, Capterra, Yelp). The buyer's scoring layer. Even when the buyer does not intend to buy on Amazon, they read Amazon reviews as a reality-check on product claims; the same is true of G2 and Trustpilot for software and Yelp for local. Review-platform mention volume is now a measurement signal, not just a sales channel.
  6. Paid and social (Meta, TikTok, Instagram, retargeting). Discovery and re-engagement. The buyer encounters the brand in feed, often before any active research, and the impression sets up later branded search. Retargeting works when the buyer's research has already begun, fails when it tries to manufacture intent that does not exist yet.
  7. Owned (brand site, email, brand-owned podcast). One node, not the journey. The destination the buyer hits to verify, price, and convert, but rarely to discover.
  8. Influencer and creator content (newsletters, creator videos, sponsorships). Trust-driven third-party mentions that Edelman's 2024 Trust Barometer documents as more trusted than brand-owned advertising for many consumer verticals.

The behavioral pattern threading these destinations is non-linear and tight: a buyer can ChatGPT a question, click a citation to a Reddit thread, open Amazon in a second tab to check reviews, see a Meta ad for the brand they just searched, and land on the brand site, all inside 20 minutes. The journey is not awareness then consideration then evaluation then decision. It is a map the buyer routes through, repeatedly, with each platform answering a different question.

What brands actually see versus the full journey (the visibility gap)

If the modern journey runs across dozens of destinations, the brand-owned site is one of them. In most engagements we audit, the brand site accounts for roughly 20 to 40 percent of the buyer's actual research time, and a smaller share than that of the touches that influenced the decision. The remaining 60 to 80 percent happens off-property: in AI-search answers the brand cannot directly edit, in Reddit threads the brand cannot moderate, in review platforms the brand cannot rewrite, and in YouTube videos the brand did not commission.

The visibility gap is the difference between the slice of the journey the brand instruments and the slice that actually drove the decision. A typical brand analytics stack reads the brand-owned slice well: GA4 captures sessions, the CRM captures form fills, server-side tagging stitches identifiers across devices. None of that reads off-property activity. A buyer who spent 90 minutes researching across ChatGPT, Reddit, Amazon, and YouTube before landing on the brand site shows up in the dashboard as a single direct-traffic session that converted, with no upstream attribution. The behavioral intelligence on the brand-owned slice is high-resolution; the off-property slice is dark.

The fix is not to pretend you can instrument what you cannot. It is to add a measurement layer for the slice you can attribute (AI citation share by platform, review-platform mention volume, branded-search lift after AI presence changes, Reddit and community mention monitoring, and AI Overview presence rate) and then read the brand-owned slice against that off-property layer. Pressfit's AI visibility products measure the off-property layer on a scheduled audit cadence, with visibility audits that show citation share movement against branded-search and direct-traffic lift on the brand site.

The signals to instrument in the 2026 journey

Measurement work splits into two layers: the traditional brand-owned KPIs the funnel always tracked, and the new off-property signals the AI search era introduced. Both matter. The traditional layer reads what is happening on the brand site; the new layer reads what is happening to the brand on the rest of the journey. The behavioral intelligence layer ties the two together and reads each on its right cadence — most signals move on weekly to monthly timescales, not hourly, which is why scheduled audits are the right cadence for both.

Traditional KPIs (still essential, brand-owned slice)

  1. Organic traffic and source mix. Sessions by channel, with the AI-search traffic carved out as its own bucket where measurable (UTM-tagged citations, direct-traffic spikes after AI mentions). Google Search Central's documentation notes that AI Overview clicks are not reported separately from other Search traffic, so direct-traffic shifts after AI presence increases are the cleanest read.
  2. Branded vs non-branded search volume. The classic split. Off-property awareness work shows up as branded-search lift weeks before direct traffic does.
  3. Conversion rate (CVR) by source. Per-channel conversion at the same conversion event lets you compare quality across paid, organic, AI-referred, and direct.
  4. Customer acquisition cost (CAC) and CAC payback. Total acquisition spend divided by net new customers, with payback period the lagging confirmation that the spend was efficient.
  5. Average order value or annual contract value (AOV / ACV). The denominator that makes CAC interpretable. CAC payback is misleading without it.
  6. MQL volume and SQL conversion rate. Still the primary handoff KPIs in B2B; less central in B2C ecommerce where checkout is the equivalent event.
  7. Email engagement (open rate, click rate, reply rate). The owned re-engagement layer. Reply rate, where measured, is the cleanest engagement signal because most automated opens are filtered out.
  8. Retention and repeat-purchase rate. The post-purchase telemetry that anchors lifetime value. Funnels that turn off at conversion give up the strongest signal for renewal and expansion risk.
  9. Net promoter score (NPS) or category equivalent. The sentiment KPI most teams already collect. Useful as a directional check on review-platform sentiment, not a substitute for it.
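The arithmetic behind items 3 to 5 is simple but easy to wire up inconsistently across channels. A minimal sketch, with purely illustrative numbers (none of these figures are benchmarks from the engagements described above):

```python
# Minimal sketch of the arithmetic behind CVR, CAC, and CAC payback.
# All numbers are illustrative, not benchmarks.

def cvr(conversions: int, sessions: int) -> float:
    """Conversion rate for one channel at a single conversion event."""
    return conversions / sessions if sessions else 0.0

def cac(total_acquisition_spend: float, net_new_customers: int) -> float:
    """Customer acquisition cost: total spend divided by net new customers."""
    return total_acquisition_spend / net_new_customers

def cac_payback_months(cac_value: float, monthly_margin_per_customer: float) -> float:
    """Months of gross margin needed to recover the acquisition cost."""
    return cac_value / monthly_margin_per_customer

# Compare channel quality at the same conversion event, per KPI 3.
channels = {
    "organic":     {"sessions": 12000, "conversions": 240},
    "paid_social": {"sessions": 8000,  "conversions": 96},
    "ai_referred": {"sessions": 900,   "conversions": 36},
}
for name, c in channels.items():
    print(name, round(cvr(c["conversions"], c["sessions"]), 3))

unit_cac = cac(48000.0, 120)                      # 400.0 per customer
payback = cac_payback_months(unit_cac, 80.0)      # 5.0 months
```

The point of keeping the conversion event identical across channels (KPI 3) is visible here: the same `cvr` function runs over every bucket, so an AI-referred session is judged on the same terms as a paid one.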

New signals (off-property, AI-search era)

  1. AI citation share, per platform. The percentage of relevant prompts in your category where your brand is cited in the AI answer. Measured separately for ChatGPT, Perplexity, Claude, Gemini, and Google AI Overviews because each platform sources differently. The signal moves on weeks-to-months timescales tied to publishing and entity-graph updates, not on hours, which is why a scheduled audit cadence is the right read.
  2. AI Overview presence and citation rate. The percentage of target queries in your category where Google shows an AI Overview, and within those, the percentage where your brand is cited. Tracked on a weekly audit cadence.
  3. Review-platform mention volume and sentiment. Mention counts and sentiment shifts on Amazon, G2, Trustpilot, Yelp, and category-specific aggregators. Predicts AI citation behavior because LLMs cite review content heavily, and predicts branded-search shifts when sentiment swings.
  4. Community mention monitoring. Reddit thread mentions, Discord and Slack community references, X/Twitter mentions in category-relevant communities, and creator-platform commentary. The signal that catches a buyer-perception shift before any brand-owned analytics will.
  5. Branded-search lift after off-property activity. The leading indicator that off-property awareness work is paying off. When AI citation share rises and review sentiment improves, branded search query volume rises before direct traffic does. Branded-search lift sits at the intersection of off-property and on-property work — it is what shows up in the brand-owned analytics when the rest of the journey is moving.
  6. Influencer and creator mention tracking. Newsletter mentions, creator review videos, podcast guest appearances, and sponsored content. Tracked as a third-party trust signal, with Edelman's 2024 Trust Barometer as the framing reference for why these mentions outperform brand-owned advertising in many verticals.
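Signal 1, AI citation share, reduces to a per-platform ratio over a fixed prompt set. A hedged sketch of the computation (the prompt set, platform names, and audit records are invented for illustration; this is the arithmetic the signal implies, not a Pressfit API):

```python
# Sketch: AI citation share per platform from a scheduled audit.
# Each record is (platform, prompt, brand_cited). All data is illustrative.
from collections import defaultdict

audit = [
    ("chatgpt",      "best running shoes for flat feet", True),
    ("chatgpt",      "running shoe brands compared",     False),
    ("perplexity",   "best running shoes for flat feet", True),
    ("perplexity",   "running shoe brands compared",     True),
    ("ai_overviews", "best running shoes for flat feet", False),
]

def citation_share(records):
    """Share of category prompts where the brand is cited, per platform."""
    totals, cited = defaultdict(int), defaultdict(int)
    for platform, _prompt, brand_cited in records:
        totals[platform] += 1
        if brand_cited:
            cited[platform] += 1
    return {p: cited[p] / totals[p] for p in totals}

print(citation_share(audit))
# chatgpt: 0.5, perplexity: 1.0, ai_overviews: 0.0
```

Keeping the ratios separate per platform, rather than blending them, preserves the point made above: each engine sources differently, so one number would hide exactly the variance the audit exists to catch.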

Neither layer replaces the other. The traditional KPIs answer "how is the brand-owned slice performing?" The new signals answer "is the rest of the journey moving?" The behavioral intelligence layer is the read against pipeline outcomes — which off-property signals correlate with on-property buying behavior, and on what cadence — so the brand can prioritize the off-property work that actually shifts the decision.

How AEO and GEO architecture map to the modern journey

Answer Engine Optimization (AEO) and Generative Engine Optimization (GEO) are the working terms for the discipline of being cited and quoted by AI search surfaces. Read against the modern journey, AEO is not an SEO tactic; it is a journey discipline: the work of being the source the AI engine quotes when the buyer asks the category question that opens their decision.

The architectural pattern is consistent across the AI surfaces that publish guidance. Pages need a clear answer in the first paragraph that an AI engine can extract verbatim, structured headings that match the buyer's actual question phrasing, FAQ sections with self-contained answers that schema.org's FAQPage type can mark up, JSON-LD blocks for BlogPosting and FAQPage, and outbound citations to authoritative sources that signal the page is grounded rather than synthetic. Google Search Central's AI-features documentation describes the surfaces, and the same patterns generalize across ChatGPT, Perplexity, Claude, and Gemini citation behavior because all of them prefer answer-shaped, well-structured, citation-rich pages.
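One way to sketch the FAQPage JSON-LD block the paragraph describes, built in Python for illustration. The question and answer text are placeholders; the `@type` and property names follow schema.org's FAQPage, Question, and Answer types:

```python
# Sketch of an FAQPage JSON-LD block. Field values are placeholder content;
# the structure follows schema.org's FAQPage / Question / Answer types.
import json

faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is the modern consumer decision journey?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "The multi-platform research and evaluation path a buyer "
                    "travels before purchase, fragmented across search, AI "
                    "answers, forums, video, and review platforms."
                ),
            },
        }
    ],
}

# The serialized output goes in a <script type="application/ld+json"> tag.
print(json.dumps(faq_jsonld, indent=2))
```

Note that the `name`/`acceptedAnswer.text` pair mirrors the self-contained FAQ answers described above: the answer has to stand alone, because that is the unit an answer engine extracts.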

GEO extends the same discipline to the wider answer-engine surface, including AI assistants embedded in browsers, voice assistants, and embedded chat in Google's own products. Read against the journey, AEO and GEO together are the work of inserting the brand into the AI-search node of the destination map. They do not replace the brand site work; they ensure that when the buyer's journey passes through an AI surface, the brand is in the answer.

The B2B counterpart, the funnel-and-stage view of pipeline-driven buying, is covered in the sales funnel stages: a behavioral-signal framework piece (PLG-3006) and the long-cycle deep-dive in B2B sales funnel for long cycles (PLG-3004). The modern consumer decision journey is the consumer counterpart: same brand frame, different platform expression.

How Pressfit.ai reads the modern journey in client engagements

Pressfit.ai engages on the modern consumer decision journey as a managed service, with two layers wired together. The brand-owned slice is instrumented through behavioral intelligence: GA4 events, server-side tagging, internal site search, page-sequence tracking, and stage-tagged content clusters that score the buyer against where they actually sit in the journey. The off-property slice is measured through the AI visibility products: AI citation share by platform, AI Overview presence rate, review-platform mention monitoring, and the branded-search lift signal that ties the off-property work to on-property pipeline.
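Pressfit's actual scoring model is not public; the following is a hypothetical sketch of what stage-tagged page-sequence scoring could look like. The stage tags, weights, and the scoring functions themselves are invented for illustration:

```python
# Hypothetical sketch of stage-tagged page-sequence scoring. Stage tags and
# weights are invented for illustration; this is not Pressfit's actual model.

STAGE_WEIGHTS = {"awareness": 1, "evaluation": 3, "decision": 5}

def journey_score(page_sequence):
    """Score a session from the stage tags of the pages it visited."""
    return sum(STAGE_WEIGHTS.get(stage, 0) for stage in page_sequence)

def likely_stage(page_sequence):
    """Deepest journey stage seen in the sequence, judged by weight."""
    seen = [s for s in page_sequence if s in STAGE_WEIGHTS]
    return max(seen, key=STAGE_WEIGHTS.get) if seen else "unknown"

# Example: a session that moved from awareness content into decision content.
session = ["awareness", "evaluation", "evaluation", "decision"]
print(journey_score(session), likely_stage(session))  # 12 decision
```

The design choice the sketch illustrates is the one the paragraph names: the buyer is scored against where they actually sit in the journey, from pages they actually visited, rather than against the channel that happened to deliver the last click.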

Engagements run on scheduled audits and deliverable-based reporting, not always-on monitoring. The behavioral intelligence frame is what holds the two slices together: every signal, on-property or off, is validated against a pipeline or revenue event downstream, so the dashboard reads what actually drove the decision rather than what the channel attribution rules guessed at. The frame is grounded in telemetry the buyer actually generated, tied to outcomes rather than impressions, and delivered as scoped engagements with named deliverables.

AI visibility is the off-property measurement layer. Behavioral intelligence is the brand-owned read. The modern journey is what they map to: a fragmented destination map where the brand instruments what it can see and measures what it can attribute, and accepts that the majority of the journey runs through platforms it does not control.

What's next

If your team is reframing measurement for the AI-search era, the fastest path is a Pressfit.ai discovery call. You will leave with a read on which destinations in the modern journey your dashboards are blind to, which AI-search and review-platform signals you should be scoring against, and a scoped recommendation for closing the visibility gap.

Book a discovery call

FAQ

What is the modern consumer decision journey?

The modern consumer decision journey is the multi-platform research and evaluation path a buyer travels before purchase, fragmented across dozens of destinations, including Google search, AI Overviews, ChatGPT, Perplexity, Claude, Gemini, Reddit and other forums, YouTube, TikTok, Amazon reviews, paid social, and the brand site. Buyers loop between destinations inside a single decision rather than moving through a linear funnel, and the brand instruments roughly 20 to 40 percent of the path on its own property.

How is the 2026 consumer decision journey different from McKinsey's 2009 model?

McKinsey's 2009 Consumer Decision Journey correctly described the buyer's loop (evaluation expanding the consideration set, post-purchase feeding the next decision), but the loops in 2026 run across AI-search engines, Reddit, review platforms, and creator content that the 2009 framework predates. The four-loop structure still applies; the destination map underneath it is new and is where most of the modern journey now lives.

Why does the traditional inverted-triangle funnel break in 2026?

The inverted-triangle funnel assumes a linear path the brand can pull buyers through and a media environment the brand controls. AI search, LLMs, and review platforms broke both assumptions. The buyer's awareness and evaluation now happen on platforms the brand cannot buy into directly (AI Overviews, ChatGPT, Reddit, Amazon reviews), and the journey is a destination map rather than a stage sequence.

Which signals should consumer brands instrument in the modern journey?

Five signals on top of traditional funnel telemetry: AI citation share per platform (ChatGPT, Perplexity, Claude, Gemini, AI Overviews), review-platform mention volume and sentiment (Amazon, G2, Trustpilot, Yelp), branded-search lift, AI Overview presence rate, and community mention monitoring (Reddit, Discord, YouTube comments). None of these require always-on dashboards; scheduled audit deliverables read them correctly, and the behavioral intelligence layer ties them to pipeline outcomes.

How big is the visibility gap between the brand-owned slice and the full journey?

In most engagements we audit, the brand-owned site accounts for roughly 20 to 40 percent of the buyer's actual research time. The remaining 60 to 80 percent happens off-property: AI search, Reddit, review platforms, YouTube, and creator content the brand does not directly instrument. Closing the gap is not about instrumenting what you cannot see; it is about adding a measurement layer (AI citation share, review mentions, branded-search lift) for the slice you can attribute.

What makes Pressfit.ai's approach to the modern journey different?

Behavioral intelligence on the brand-owned slice and AI visibility on the off-property slice, wired together. Pressfit.ai measures AI citation share, AI Overview presence, review-platform mentions, and branded-search lift on a scheduled audit cadence, then validates each signal against a pipeline or revenue event on the brand site. The frame keeps the McKinsey loop and adds the destination map underneath, with every score grounded in buyer telemetry rather than channel-attribution guesses.

Want to see behavioral intelligence in action?

Book a pipeline review and we will show you what your buyers actually respond to.

Get Onboarded