Product Analytics Captures Your Events. It Doesn’t Understand Your Customers.

If you read our piece earlier this week on why 67% of digital consumers are invisible to funnel-based analytics, you left with a number that’s hard to put down. Shoppers who visit the cart five or more times — the users your platform is most aggressively trying to recover — convert at 40%. Search loops, which every funnel report records as drop-offs, are among the strongest purchase intent signals in the dataset.

The natural follow-up isn’t “is this true?” It’s “how is this possible?” You’re running a sophisticated analytics stack. You’re capturing events at scale. How are the most predictive behavioral signals hiding inside data you already have?

The answer comes down to a single distinction that most product teams have never had reason to think carefully about: capturing events is not the same as understanding behavior. These are different things. Most analytics platforms do the first. Almost none do the second.


What product analytics tools actually capture

Product analytics platforms like Amplitude and Mixpanel are event-based systems. They’re designed to capture discrete user actions — a page view, a search query, an add-to-cart, a session start and end — and store them as timestamped records. On standard configurations, these tools capture every instrumented user action as an event, allowing teams to analyze product usage and user journeys at the individual user level.

This process has a fundamental limitation that explains the 67%.

Both platforms only capture events that have been instrumented. Someone on your team decided which actions to track, wrote the code to track them, and defined the event schema. Everything outside that schema is invisible. More critically, even for the events that are captured, each stored record is an independent row in a table. There is no built-in relationship in the data model between event 47 and event 48, between what a user did yesterday and what they’re doing today, between a search that happened three minutes before a cart visit and the cart visit itself.

This is what “stateless” means in the context of product analytics. It doesn’t mean the events are missing. It means the system has no persistent memory of what state a user was in when each event fired. Each event is processed as if it were the first thing the user ever did.
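To make the distinction concrete, here is a minimal sketch of a stateless event store. The schema and event names are illustrative, not any platform’s actual format; the point is that counting questions fall out of independent rows directly, while “what was this user doing?” requires re-reading the whole trail, per user, per query.

```python
from datetime import datetime

# Hypothetical event records, shaped the way an event-based store keeps
# them: each row is timestamped and self-contained, with no field linking
# it to the user's prior state or to neighboring events.
events = [
    {"user_id": "u1", "event": "search",       "ts": datetime(2026, 5, 1, 9, 0)},
    {"user_id": "u1", "event": "view_product", "ts": datetime(2026, 5, 1, 9, 2)},
    {"user_id": "u1", "event": "add_to_cart",  "ts": datetime(2026, 5, 1, 9, 5)},
]

# A count-style question is answerable directly from independent rows:
cart_adds = sum(1 for e in events if e["event"] == "add_to_cart")

# But "what was u1 doing when they added to cart?" has no stored answer.
# The only route is to sort and re-read the full trail -- the
# reconstruction step the system never does for you.
trail = sorted((e for e in events if e["user_id"] == "u1"), key=lambda e: e["ts"])
preceding = [e["event"] for e in trail if e["event"] != "add_to_cart"]

print(cart_adds)   # 1
print(preceding)   # ['search', 'view_product']
```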


Why stateless data produces the wrong answer for most product questions

For simple counting questions, statelessness is fine. “How many users viewed this product page today?” is a count — you don’t need state to answer it. “What is the overall conversion rate this week?” is a ratio — same thing.

The limitation becomes serious when the question requires understanding what a user was doing, not just what a user did. And that distinction matters for almost every product decision that actually moves revenue.

Consider two shoppers who both show up in your data as cart abandonment events.

The first shopper clicked a paid ad, landed on a product page, added the item to cart, and left. One session, no purchase. Most product analytics tools will capture this accurately. The event record is there.

The second shopper has a much longer trail across your database: three sessions spread over four days, a dozen search queries, twenty-plus product page views, multiple returns to the cart, wishlist activity, comparison behavior across several items. Every single one of those actions is also in your event database — the searches, the product views, the cart visits, all of it. And then, like the first shopper, a cart session with no purchase.

Here is what happens next. When your analytics tool runs its cart abandonment query, it looks for a specific event — cart session with no purchase event — and both shoppers have it. The query returns both of them in the same cohort, because the query is looking at an event, not a journey. It has no built-in mechanism to recognize that the second shopper is carrying four days of accumulated consideration context, because that context isn’t a stored state anywhere in the system. It’s a pattern distributed across dozens of independent rows in a table, one that would have to be manually reconstructed for every user, every time, through custom SQL that almost no one writes at scale.
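A sketch makes the failure mode visible. The event log and field names below are invented for illustration; the query logic is the standard stateless shape of an abandonment query, and the second function is the per-user reconstruction pass that stateless systems leave to you.

```python
# Hypothetical minimal event log for the two shoppers described above.
events = [
    # Shopper A: one session, one day
    {"user": "A", "event": "view_product", "day": 1},
    {"user": "A", "event": "cart_view",    "day": 1},
    # Shopper B: four days of accumulated consideration
    {"user": "B", "event": "search",       "day": 1},
    {"user": "B", "event": "view_product", "day": 1},
    {"user": "B", "event": "search",       "day": 2},
    {"user": "B", "event": "view_product", "day": 2},
    {"user": "B", "event": "cart_view",    "day": 2},
    {"user": "B", "event": "view_product", "day": 3},
    {"user": "B", "event": "cart_view",    "day": 4},
]

# The abandonment query as a stateless system runs it: "has a cart event,
# has no purchase event." Both shoppers satisfy it identically.
def abandoners(events):
    carted    = {e["user"] for e in events if e["event"] == "cart_view"}
    purchased = {e["user"] for e in events if e["event"] == "purchase"}
    return carted - purchased

print(sorted(abandoners(events)))  # ['A', 'B'] -- same cohort

# Telling them apart means reconstructing context from raw rows, the
# custom per-user pass that almost never happens at scale.
def consideration_days(events, user):
    days = {e["day"] for e in events if e["user"] == user}
    return max(days) - min(days) + 1

print(consideration_days(events, "A"))  # 1
print(consideration_days(events, "B"))  # 4
```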

So your abandonment recovery campaign reaches both shoppers. Your cart optimization is evaluated against both shoppers’ sessions. Your conversion benchmarks include both in the same denominator. And your product decisions are made against a model of “cart abandoners” that systematically includes your highest-intent buyers.

Tools designed to count events simply cannot answer questions about behavioral intent.


The question stateless analytics cannot answer

Stateless analytics answers: what did this user do?

Stateful analytics answers: what was this user doing — and what does that tell us about what they’re likely to do next?

That’s the difference between a list of facts and an understanding of behavior.

Consider what it actually takes to know that a shopper with five cart visits is in a high-intent consideration state, not an abandonment state. You’d need to know not just that they visited the cart five times, but that those five visits happened across multiple sessions over several days, that each cart visit was preceded by a return to product pages, that the time between sessions is consistent with a deliberation pattern. You’d need to know their current state — where they are in a continuous behavioral arc — not just their last recorded event.
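The checks listed above can be sketched as a rule over per-session summaries. The field names and thresholds here are illustrative assumptions, not the report’s definitions — the point is that every input is a property of the journey, none of which exists in a single event row.

```python
# A hypothetical classifier over per-session summaries, oldest first.
# Each session dict carries fields a stateful system would maintain:
#   day: which day the session occurred
#   cart_visits: cart visits in that session
#   product_views_before_cart: did product-page views precede the cart?
def is_high_intent_consideration(sessions):
    cart_visits = sum(s["cart_visits"] for s in sessions)
    multi_session = len(sessions) >= 3                      # several sessions
    span_days = sessions[-1]["day"] - sessions[0]["day"]    # spread over days
    researched_first = all(
        s["product_views_before_cart"] for s in sessions if s["cart_visits"]
    )
    return cart_visits >= 5 and multi_session and span_days >= 2 and researched_first

sessions = [
    {"day": 1, "cart_visits": 1, "product_views_before_cart": True},
    {"day": 2, "cart_visits": 2, "product_views_before_cart": True},
    {"day": 4, "cart_visits": 2, "product_views_before_cart": True},
]
print(is_high_intent_consideration(sessions))  # True
```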

Or take search loops. When a shopper goes from a search results page to a product page and clicks back to search, that backward navigation is a single click. In a stateless system it’s recorded as a page view and a navigation event, with no built-in relationship to the search session it belongs to or the product consideration it followed. Aggregated across millions of sessions, backward navigation gets treated as noise or, at best, as a bounce. But in the research behind our State of Digital Experience 2026 report, shoppers who ran three or more search loops converted at 18.59% — 25 times the session average. The behavioral signal is encoded in the event trail. The meaning of it requires reading the sequence.
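Reading that sequence is a small amount of code once the page trail is assembled, which is exactly the step a stateless system omits. The page names below are illustrative, and the loop definition (a return to search immediately after a product page) is an assumption for the sketch.

```python
# Count search loops in an ordered page trail: each return to search
# directly after a product page counts as one loop.
def count_search_loops(page_sequence):
    loops = 0
    for prev, curr in zip(page_sequence, page_sequence[1:]):
        if prev == "product" and curr == "search":
            loops += 1
    return loops

session = ["search", "product", "search", "product",
           "search", "product", "search", "product", "cart"]
print(count_search_loops(session))  # 3 loops before reaching the cart
```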

This is what the 67% finding is actually measuring. It’s not that 67% of your users are doing things your analytics tool fails to record. It’s that 67% are taking paths whose meaning depends on sequence, duration, and accumulated behavioral context — and stateless tools, by design, don’t preserve those dimensions. The data is there but the understanding is not.


What stateful, time-sequence analytics does differently

A stateful analytics system maintains a continuously updated model of where each user is in their behavioral journey — not as a post-hoc reconstruction from event logs, but as a live record that updates in real time as each event fires.

The difference this makes is concrete. When a shopper’s fifth cart visit hits the system, a stateful platform already knows they are in their fourth day of consideration, that their sessions are getting shorter and more cart-focused, that this pattern — extended multi-session research converging toward a specific product — is a behavioral state associated with high purchase probability. That context is available at query time without any reconstruction, because it was maintained continuously as the journey evolved.

This is the concept behind time-state analytics: the continuous tracking of how a user’s behavioral state changes over time, with each event understood in the context of the state that preceded it. Every user journey can be modeled as a sequence of states with meaningful transitions — browsing → researching → comparing → staging → committing — with loops, returns, and revisits that are themselves informative signals, not noise. A system built to track those states can answer questions that a system built only to log events cannot.
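A minimal version of that model is a per-user state updated as each event arrives, rather than reconstructed from logs afterward. The states mirror the arc named above; the transition rules and event names are illustrative assumptions, not the product’s actual model.

```python
# Transition table: (current state, incoming event) -> next state.
# Loops back and revisits are kept as transitions, not discarded as noise.
TRANSITIONS = {
    ("browsing",    "search"):       "researching",
    ("researching", "view_product"): "comparing",
    ("comparing",   "search"):       "researching",  # search loop: a signal
    ("comparing",   "cart_view"):    "staging",
    ("staging",     "view_product"): "comparing",    # revisit: informative
    ("staging",     "checkout"):     "committing",
}

def advance(state, event):
    # Unknown pairs keep the current state; each event is read in the
    # context of the state that preceded it.
    return TRANSITIONS.get((state, event), state)

state = "browsing"
for event in ["search", "view_product", "search", "view_product", "cart_view"]:
    state = advance(state, event)
print(state)  # staging
```

Because the state is maintained continuously, the answer to “where is this user right now?” is available at query time with no reconstruction, which is the property the cart-abandonment example lacks.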

This is also what makes pattern detection possible at scale. A behavioral pattern isn’t a single event — it’s the full arc of what happened, in what order, with what time between each step. Detecting that “five cart visits across multiple sessions” is a high-intent cohort requires seeing all five visits as a continuous behavioral state, not as five independent rows in a database. Detecting that search loops predict purchase intent requires treating backward navigation as a meaningful state transition, not discarding it as noise in aggregation.

And full-census capture matters here in a way that goes beyond the sampling question. Pattern detection within narrow, high-value cohorts — “shoppers in a multi-session consideration arc for a specific product category,” for example — requires every session in order to maintain statistical confidence. Sampled data systematically loses the rare-but-predictive patterns that represent the highest-intent behavior in your user base.


Why this gap is widening, not closing

The stateless/stateful distinction has always been a real limitation of event-based analytics. What’s changing is the cost of that gap.

AI shopping agents — the tools increasingly mediating the first stage of consumer purchase research — are evaluating digital experiences based on behavioral signals: does this product return relevant results? Does this checkout flow complete without friction? Do consumers with high intent leave with what they came for? Those are sequential, contextual, cumulative signals. They are the signals stateless systems cannot reliably produce, because they require understanding behavioral arcs, not counting events.

A digital product that can’t measure how 67% of its own users behave cannot optimize for the experience quality that will determine how those agents route traffic. The measurement gap that has always cost conversion revenue is becoming a competitive infrastructure gap as well.

Events are what happened. Behavior is what it means. Most analytics tools have only ever done the first job. In the era of AI-mediated commerce, the second job is the one that matters.


Download the State of Digital Experience 2026 report: the data behind the 67% finding, with full methodology.

Attending ProductCon on May 20 in NYC? Conviva’s team is on the floor. Let us show you what we see in your data when we look beyond the funnel. Want a free ticket? Get one here.