New data shows that two-thirds of your most valuable customers take journeys your analytics were never built to see. The implications for AI agent design — and the analytics infrastructure behind them — are significant.
I’ve spent a long time thinking about how to measure experience — what it means to truly see a user’s journey rather than approximate it. At Conviva, we built our entire platform on the belief that stateful, full-census telemetry is the only honest way to understand what customers actually do. Event counts and funnel stages flatten the story. They’re not a simplification — they’re a different story entirely.
We just released data that makes this concrete in a way I haven’t seen before. Across two independent datasets — e-commerce retail and online travel booking — we found the same structural pattern in both:
67% of digital consumers do not follow the linear purchase path that traditional product analytics is built to measure. The funnel describes the minority.
E-commerce: 65% non-linear. Travel booking: 70% non-linear. Two completely unrelated industries, two independent datasets, same finding. That cross-sector consistency is what makes this structural rather than a category quirk. This is how people make considered purchase decisions online.
This matters enormously right now — not just for product analytics, but for everyone building and deploying AI agents. Because if 67% of your customers behave in ways your analytics can’t see, the same 67% are behaving in ways your agents can’t see either. And a blind agent doesn’t just fail to help — it actively gets in the way.
The cart isn’t where customers abandon. It’s where they think.
Here’s what I find most striking in the data. The shoppers most analytics tools flag as ‘high abandonment’ — consumers who visit their cart five or more times — aren’t abandoning at all.
Shoppers with 5+ cart visits convert at 40%, roughly 1.7× the rate of first-time cart visitors. The "most abandoned" cohort is actually your highest-intent buyers.
Compare that to a 23% conversion rate for shoppers who visit the cart just once. The consumers who keep coming back — who treat the cart as a staging area, a comparison tool, a decision space they can return to across sessions — are far more likely to buy. Funnel-based tools record every return visit as abandonment. Stateful pattern analytics reads it as what it is: active consideration.
Now think about what happens when you deploy an AI shopping agent trained on funnel logic. It sees a customer on their fourth cart visit and triggers an intervention — a discount, a nudge, a ‘still thinking about it?’ prompt. It’s not helping. It’s interrupting a buyer who was already going to buy, at exactly the moment they were working through their decision. The agent got the signal backwards.
This is the problem context graphs are designed to solve. A context graph preserves the state of what happened earlier in the session — and across sessions. It tells the agent: this person has been here before, this is their third visit this week, they’re deep in consideration. Hold off. Wait. Context is the difference between a helpful agent and a disruptive one.
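To make the cart example concrete, here is a minimal sketch of an intervention gate that reads cross-session cart state instead of single-session funnel position. The 5+ visit threshold comes from the data above; the type names, field names, and decision labels are illustrative assumptions, not Conviva's implementation.

```python
from dataclasses import dataclass

@dataclass
class CartContext:
    """Hypothetical cross-session cart state for one shopper."""
    cart_visits: int              # total cart visits across all sessions
    sessions: int                 # distinct sessions observed
    days_since_first_visit: float

def next_action(ctx: CartContext) -> str:
    """Decide whether the agent should intervene or stay out of the way."""
    if ctx.cart_visits >= 5:
        # Repeated cart visits read as active consideration, not
        # abandonment: the data says this cohort converts at 40%.
        return "hold"
    if ctx.cart_visits == 1 and ctx.sessions == 1:
        # A true first-time visitor may benefit from a gentle assist.
        return "offer_help"
    return "observe"
```

A funnel-trained agent would fire on the fourth cart visit; this gate does the opposite, holding precisely when accumulated state signals high intent.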
Loops are buying signals, not drop-offs.
The same pattern shows up in search behavior. Nearly half — 49% — of converting e-commerce sessions include at least one return-to-search loop: the consumer moves from product pages back to search results, then forward again. In our data, they’re the defining behavior of buyers:
Each additional loop raises conversion probability: one loop, 13%; two loops, 16%; three or more, 19%. Every "drop-off" in this cohort was actually a buying signal.
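The loop counts above can be derived directly from a raw page-view stream. Here is a minimal sketch, assuming a simplified event model where each page view is labeled "search" or "product"; the labels and function name are illustrative, not a real schema.

```python
def count_search_loops(page_views) -> int:
    """Count return-to-search loops: search -> product(s) -> back to search."""
    loops = 0
    seen_product_since_search = False
    for page in page_views:
        if page == "search":
            if seen_product_since_search:
                # The shopper went forward to products, then came back:
                # one completed loop, a buying signal rather than a drop-off.
                loops += 1
            seen_product_since_search = False
        elif page == "product":
            seen_product_since_search = True
    return loops
```

A session like search → product → search → product → search → checkout yields two loops, putting that shopper in the 16%-conversion cohort rather than the "bounced twice" bucket a funnel would record.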
Travel tells an even starker version of this story. The average confirmed booking involves 3.5 searches. Buyers initiate checkout twice on average before confirming. And 36% of travel converters required multiple sessions — returning an average of 4.25 days after their first search. Session-level analytics renders these customers invisible between visits. They look like they left. They didn’t leave. They’re still deciding.
A consumer-facing AI travel agent with no context graph is starting fresh every time that customer returns. It doesn’t know they searched twice last week. It doesn’t know they got to the payment page and backed off when they saw the full price — which is what 50% of travel users who reach payment do. Without that context, the agent can’t meet them where they actually are in their decision. It can only guess.
The right framework: context graphs connect the signal.
At Conviva, we think of context graphs as the structured state and relational context that AI agents maintain across a conversation or multi-step workflow. They preserve intent, prior actions, tool calls, and outcomes — not just the last message, but the arc of the interaction. For consumer-facing agents, that arc often spans multiple sessions, multiple devices, and multiple days.
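One way to picture the structure described above is a small event log keyed by user, where each entry carries its session and kind. This is an illustrative sketch only; the field names and event kinds are assumptions, not Conviva's schema.

```python
from dataclasses import dataclass, field

@dataclass
class ContextEvent:
    """One node in a hypothetical context graph."""
    session_id: str
    kind: str        # e.g. "intent", "action", "tool_call", "outcome"
    detail: str
    timestamp: float

@dataclass
class ContextGraph:
    user_id: str
    events: list = field(default_factory=list)

    def record(self, event: ContextEvent) -> None:
        self.events.append(event)

    def sessions(self) -> set:
        # The arc of the interaction spans sessions, devices, and days.
        return {e.session_id for e in self.events}

    def last_intent(self):
        """Most recent recorded intent, surviving across sessions."""
        for e in reversed(self.events):
            if e.kind == "intent":
                return e.detail
        return None
```

The point of the structure is continuity: an agent consulting `last_intent()` on a returning visitor is not starting from zero, even if the last message was days ago.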
Context graphs are necessary. But they’re not sufficient on their own.
The question I keep coming back to is: how do you know if the context graph is working? You need to measure not just whether the agent has context, but whether that context translated into the right action — and whether that action drove the outcome you were trying to achieve. Did the agent correctly read the high-cart-revisit customer as high-intent and hold its intervention? Or did it interrupt anyway? Did the returning travel shopper get a contextually relevant response, or a generic one?
This is the measurement gap that Agent Experience Analytics (AXA) is built to close. Context graphs give agents memory. AXA tells you whether the memory made them better — by connecting agent behavior directly to the business outcomes that matter: conversion, resolution, retention, revenue.
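The core of that measurement is a join: pair each agent decision with the business outcome that followed, then compare outcomes across decisions. A minimal sketch, with an assumed record shape of (decision, converted) pairs; this is not the AXA product's actual data model.

```python
from collections import defaultdict

def conversion_by_decision(records):
    """Conversion rate grouped by agent decision.

    records: iterable of (agent_decision: str, converted: bool).
    Returns {decision: conversion_rate}.
    """
    totals = defaultdict(lambda: [0, 0])  # decision -> [conversions, count]
    for decision, converted in records:
        totals[decision][0] += int(converted)
        totals[decision][1] += 1
    return {d: conv / n for d, (conv, n) in totals.items()}
```

If "hold" decisions on high-cart-revisit customers convert better than "intervene" decisions on the same cohort, the context graph is earning its keep; if not, the agent has memory but isn't using it well.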
One more thing the data reveals: the AI-referred customer.
There’s a fourth journey pattern in our data that I think is underappreciated and directly relevant to where the market is going. We call it the Intent-Led Arriver: a consumer who completes their research in an AI search tool — ChatGPT, Perplexity, Gemini — and arrives at your site having already made the hard decision. They enter mid-funnel or directly at the product page.
AI-referred arrivals convert at 2–3× the rate of organic search. Yet last-click attribution collapses this signal into "direct" or "referral," and it disappears entirely from funnel models.
These customers arrived because an AI agent recommended your product. Their context graph started before they ever touched your site. The challenge — and the opportunity — is building the analytics infrastructure to recognize them, serve them appropriately, and learn from what happened. That requires continuity of context across the handoff from the AI search agent to your own product experience.
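The first infrastructure step is simply classifying the arrival correctly instead of letting it collapse into "direct" or "referral." A sketch of referrer-based classification follows; the domain list is a plausible starting point, not an exhaustive or authoritative registry.

```python
from urllib.parse import urlparse

# Illustrative set of AI search/assistant referrer hosts.
AI_REFERRERS = {
    "chat.openai.com", "chatgpt.com",
    "perplexity.ai", "www.perplexity.ai",
    "gemini.google.com",
}

def classify_arrival(referrer_url: str) -> str:
    """Bucket an arrival by its referrer instead of last-click defaults."""
    if not referrer_url:
        return "direct"
    host = urlparse(referrer_url).netloc.lower()
    if host in AI_REFERRERS:
        # This customer's context graph started before they reached the site.
        return "ai_referred"
    if "google." in host or "bing." in host:
        return "organic_search"
    return "referral"
```

Referrer headers are an imperfect signal (apps and privacy settings strip them), so production systems would combine this with UTM parameters or landing-page patterns; but even this crude split stops the 2–3× cohort from vanishing into "direct."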
This is the direction the whole space is moving. Not discrete sessions, but continuous journeys that start in AI search, move through your product, and are shaped at every step by agents that have — or don’t have — the context to be useful.
What product leaders should take away.
I’m speaking at ProductCon in New York on May 20, and this data will be central to what I’m bringing to that room. The product leaders there are building and deploying AI agents right now. They’re grappling with measurement gaps, with analytics infrastructure, with whether their agents are actually helping or just generating activity.
The data from our State of Digital Experience Report isn't just a critique of funnel analytics. It's a map of what's actually happening in consumer purchase journeys, and a case study in why context, state, and stateful measurement matter at every layer of the stack. We'll be diving into it at ProductCon next month. If you're there, find us at the booth or come to the panel.
Aditya Ganjam is Co-Founder and Chief Product Officer at Conviva, where he leads product strategy. Aditya co-founded Conviva in 2006 alongside Hui Zhang, Ion Stoica, and Jibin Zhan, and has spent nearly two decades building the analytics infrastructure that connects digital experience to business outcomes. He will be speaking at ProductCon New York on May 20, 2026.