Product Analytics Is Giving You Funnel Vision

Let’s talk about the users your analytics platform has flagged as your worst offenders — the ones who keep coming back to the cart and never checking out. The repeat visitors. The ones triggering your abandonment recovery emails.

According to new data from Conviva, those users convert at 40 percent.

First-time cart visitors, by comparison, convert at less than 25 percent. Your “abandoners” convert at nearly double the rate of your one-and-done visitors — and your analytics stack has been sending them discount codes and win-back sequences while they were actively deciding to buy.


This is what funnel vision looks like.

Funnel vision isn’t a configuration problem or a dashboard gap. It’s a structural consequence of using a measurement model designed for a straight-line journey to describe a buying behavior that is almost never straight-line. When your analytics tool can only show you what passes through the funnel’s defined stages — in order, in a single session, without looping back — everything outside that narrow field of view is invisible. And “invisible” doesn’t mean unimportant. Here, the invisible part is the majority.

Conviva analyzed sessions across two independent industries: large-scale e-commerce and travel booking. These are different products, different price points, different decision cycles. Across both of them, the same structural finding held: only 33% of consumers followed the straight-line funnel path. The other 67% took routes that funnel-based analytics is architecturally unable to capture — iterative search loops, mid-funnel re-entry, repeated cart visits, and multi-session consideration arcs spread across days.

Two unrelated industries. The same 67%.


What the data actually shows

The 40% cart-visit finding is striking on its own. But it’s one signal in a sequence that points in the same direction.

Almost half of all buyers loop. In the e-commerce dataset, 49% of converting sessions include a return-to-search loop — the shopper views a product, goes back to search or browse, then comes back to a product page. In a funnel model, that backward navigation is recorded as a drop-off. It isn’t. It’s the primary behavior of the most likely buyers in the data. And conversion rate scales with loop depth: one search loop produces 13% conversion, two loops produce 16%, three or more loops produce 19% — 25 times the session average. Funnel analytics records each loop as abandonment.
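
To make the loop measurement concrete, here is a minimal sketch in Python of how loop depth and conversion could be computed from raw clickstream sessions. The event names (“search”, “product_view”, “purchase”) and the flat list-of-events shape are illustrative assumptions, not Conviva’s schema; the point is that the metric only exists if you read the whole sequence.

```python
from collections import defaultdict

def count_search_loops(events):
    """Count return-to-search loops: a 'search' that occurs after at
    least one 'product_view' earlier in the same session."""
    loops = 0
    seen_product = False
    for event in events:
        if event == "product_view":
            seen_product = True
        elif event == "search" and seen_product:
            loops += 1
            seen_product = False  # a new product view is needed before the next loop counts
    return loops

def conversion_by_loop_depth(sessions):
    """sessions: list of per-session event lists.
    Returns {loop_depth_bucket: conversion_rate}."""
    totals = defaultdict(int)
    conversions = defaultdict(int)
    for events in sessions:
        bucket = min(count_search_loops(events), 3)  # bucket 3+ loops together
        totals[bucket] += 1
        if "purchase" in events:
            conversions[bucket] += 1
    return {depth: conversions[depth] / totals[depth] for depth in totals}

# A session that views a product, searches again, views another product,
# and buys: one loop, one conversion. A funnel model logs that second
# search as backward navigation, i.e. a drop-off.
sessions = [
    ["search", "product_view", "search", "product_view", "purchase"],
    ["search", "product_view"],
]
print(conversion_by_loop_depth(sessions))  # {1: 1.0, 0: 0.0}
```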

The cart is a research tool, not a checkout queue. 65% of cart sessions loop back to product browsing after the most recent cart visit. Not 5%. Not 10%. Nearly two thirds. The cart is where people park what they’re thinking about while they keep evaluating. Treating cart exit as intent to abandon is not a measurement imprecision. It’s a systematic inversion of the signal.

Longer sessions and more page views predict conversion, not the reverse. Product page depth correlates monotonically with purchase: 1–5 pages yields 3.5% CVR; 31+ pages yields 19% — a 5.5× difference. Measured by session duration and efficiency, those long, page-heavy sessions look like friction: users who can’t find what they want. In outcome terms, they are your most likely buyers.

Multi-session consideration is normal, not unusual. In the travel dataset, 36% of converters required multiple sessions, returning an average of 4.25 days after their first search. In e-commerce, cart revisit patterns confirm the same dynamic: more than 3.5 cart revisits per shopper on average before purchase. These journeys don’t exist in session-level analytics — each session resets, and the multi-day arc is never reconstructed.
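
Reconstructing that arc is a stitching problem: sessions have to be keyed to a durable user identifier and ordered across days, which a session-scoped model never does. Here is a minimal sketch of that stitching, assuming each session record carries a user_id, a start timestamp, and a converted flag — all hypothetical field names, not a real schema.

```python
from datetime import datetime
from itertools import groupby
from operator import itemgetter

def stitch_journeys(session_records):
    """Group session-level records into per-user journeys ordered in time.

    session_records: iterable of dicts with 'user_id', 'start' (datetime)
    and 'converted' (bool). Field names are illustrative.
    Returns {user_id: {'sessions': count, 'days_to_convert': float or None}}.
    """
    journeys = {}
    ordered = sorted(session_records, key=itemgetter("user_id", "start"))
    for user_id, group in groupby(ordered, key=itemgetter("user_id")):
        sessions = list(group)
        first_seen = sessions[0]["start"]
        days_to_convert = None
        for s in sessions:
            if s["converted"]:
                days_to_convert = (s["start"] - first_seen).total_seconds() / 86400
                break
        journeys[user_id] = {"sessions": len(sessions), "days_to_convert": days_to_convert}
    return journeys

# A traveler who searches on Monday and books on Friday is one journey here,
# but two unrelated visits in a session-scoped model.
records = [
    {"user_id": "u1", "start": datetime(2026, 5, 4), "converted": False},
    {"user_id": "u1", "start": datetime(2026, 5, 8), "converted": True},
]
print(stitch_journeys(records))  # {'u1': {'sessions': 2, 'days_to_convert': 4.0}}
```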

Every one of these signals requires understanding the sequence, depth, and context of a user’s journey. None of them are measurable with stage counts.


The compounding problem

Here’s where funnel vision becomes more than a measurement gap.

Every product decision informed by funnel data is optimized for the 33%. Feature prioritization. Checkout redesign. UX investment. A/B test interpretation. Campaign attribution. Personalization models. All of it is evaluated against metrics that systematically misrepresent the majority of your users as low-intent — while the majority is doing the highest-intent things in your data.

The errors don’t stay isolated. They compound. You optimize checkout for the linear user. You trigger abandon recovery for the deep-consideration user. You deprioritize wishlist and save features because funnel metrics show low stage-conversion from those touchpoints. You measure “engagement” as session duration, conclude that long sessions mean low efficiency, and optimize for shorter ones — moving directly against conversion.

And then there’s what’s coming.

AI shopping agents — the kind being built on and for the infrastructure that much of your digital product runs on — are already mediating a growing share of purchase research. Consumers who arrive from AI search (ChatGPT, Perplexity, and their successors) convert at rates two to three times organic search averages in this dataset. They’ve already done their research elsewhere. They arrive knowing what they want.

These agents are learning, at scale, which digital products deliver excellent experiences and which ones don’t. They route traffic to products that perform well on the dimensions their users care about. And they will not return to products that disappoint.

You cannot build an experience that earns that traffic if your analytics stack can’t see how 67% of your users actually behave. You can only optimize the 33% the funnel was designed to measure. The agents don’t care about your funnel. They care about the experience.


What the right measurement model must do

The pattern across every high-conversion signal in this data is consistent: each one requires knowing the sequence, depth, and context of a consumer’s journey — not just which stage they reached.

Funnel analytics counts heads at tollgates. Pattern analytics maps the road.

That’s not a qualitative distinction. It’s an architectural one. A tool built on event snapshots — independent records with no persistent state between sessions — cannot reconstruct a multi-day consideration arc. It cannot distinguish a cart visit at the end of a three-loop research session from a cart visit on direct entry. It cannot tell you that a user who has visited their cart four times and run two search loops in the last two sessions is in deep consideration, not abandonment. It sees the same event. It counts it the same way.
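
To make the architectural point concrete, here is a minimal sketch of the stateful alternative: a per-user journey record that accumulates cart visits and search loops across sessions, and only then interprets the latest event. The thresholds echo the example in the paragraph above and ignore the recency window for brevity; they are illustrative, not a published Conviva model.

```python
from dataclasses import dataclass

@dataclass
class JourneyState:
    """Persistent per-user state carried across sessions."""
    cart_visits: int = 0
    search_loops: int = 0
    sessions_seen: int = 0

def update(state: JourneyState, session_events: list) -> JourneyState:
    """Fold one session's events into the running journey state."""
    state.sessions_seen += 1
    state.cart_visits += session_events.count("cart_view")
    seen_product = False
    for event in session_events:
        if event == "product_view":
            seen_product = True
        elif event == "search" and seen_product:
            state.search_loops += 1
            seen_product = False
    return state

def classify(state: JourneyState) -> str:
    """An event-snapshot model sees only the latest cart_view and files it
    under abandonment. With accumulated state, the same event reads as
    deep consideration."""
    if state.cart_visits >= 4 and state.search_loops >= 2:
        return "deep_consideration"
    if state.cart_visits >= 1:
        return "evaluating"
    return "browsing"

state = JourneyState()
for session in [
    ["search", "product_view", "cart_view"],
    ["product_view", "search", "product_view", "cart_view", "cart_view"],
    ["cart_view", "product_view", "search", "product_view"],
]:
    update(state, session)
print(classify(state))  # deep_consideration, not an abandoner
```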

The analytics model that can see the 67% has to preserve the full context of a user’s journey over time — the sequence, the state they were in, how that state changed, and how long they’ve been in it.

But the starting point isn’t the architecture. It’s the data. If 67% of your most likely buyers are invisible to your current stack, the question isn’t which dashboard to fix. It’s whether the model underneath the dashboard was built to see them at all.

The funnel was always a simplification. The data now shows it has become a liability.

Download the State of Digital Experience 2026 report: the data behind the 67% finding, with full methodology.

Attending ProductCon on May 20 in NYC? Conviva’s team is on the floor. Let us show you what we see in your data when we look beyond the funnel. Want a free ticket? Get one here.