A page with high traffic and a conversion ceiling
Pulse is a product analytics platform for SaaS companies. Their growth team had built a strong SEO and paid acquisition engine — 40,000 monthly visits to the pricing page, with high-intent traffic from comparison searches and review sites. Yet trial-to-paid conversion sat at 6.4%, well below the 10–12% benchmark for this category.
The commercial stakes were clear: a three-percentage-point lift in conversion would represent approximately $780K in additional ARR at their current pricing and traffic levels. The pricing page was, by a wide margin, the highest-leverage UX problem in the business.
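For a sense of how a headline figure like that decomposes, here is a minimal sketch of the arithmetic. The trial volume and per-account revenue are illustrative assumptions, since the case study doesn't publish Pulse's actual numbers; only the three-point lift is from the text.

```python
# All inputs below are illustrative assumptions -- Pulse's real trial
# volume and average revenue per account (ARPA) aren't published here.
monthly_trials = 10_000   # hypothetical trials started per month
lift = 0.03               # the three-point trial-to-paid lift cited above
arpa_annual = 216         # hypothetical annual revenue per paid account, USD

extra_customers_per_month = monthly_trials * lift            # 300 accounts
arr_run_rate_after_a_year = extra_customers_per_month * 12 * arpa_annual
print(f"${arr_run_rate_after_a_year:,.0f}")                  # ~$777,600
```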
Friction at the moment of commitment
An initial heuristic review and heatmap analysis surfaced a cluster of compounding issues on the existing pricing page.
Finding the real conversion killers
Before running any experiments, I spent three weeks building a rigorous evidence base. CRO without research is just guessing with nicer tools.
The most telling data point: session replays showed converters spent an average of 2.1 minutes on the page before clicking "Start Trial." Non-converters spent 4.8 minutes — more time, more confusion, no conversion. More information was working against us.
A hypothesis-led approach
I structured the work as a prioritised hypothesis backlog — ranking each change by confidence (evidence strength), impact (conversion potential), and ease (the inverse of implementation cost). This prevented us from running nine simultaneous changes and not knowing which one worked.
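To make the ranking mechanics concrete, here is a minimal sketch of this kind of confidence × impact × ease scoring. The hypothesis names and 1–5 scores are invented for the example, not the actual Pulse backlog.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    confidence: int  # evidence strength, scored 1-5
    impact: int      # conversion potential, scored 1-5
    ease: int        # inverse implementation cost, scored 1-5

    @property
    def score(self) -> int:
        # Multiplicative ICE-style score: higher means run it sooner.
        return self.confidence * self.impact * self.ease

# Invented entries for illustration only.
backlog = [
    Hypothesis("Collapse feature comparison table", confidence=4, impact=4, ease=3),
    Hypothesis("Move trust signals into decision zone", confidence=5, impact=4, ease=5),
    Hypothesis("Add recommended-plan badge", confidence=4, impact=5, ease=5),
]

for h in sorted(backlog, key=lambda h: h.score, reverse=True):
    print(f"{h.score:>3}  {h.name}")
```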
What changed and why
The feature comparison table was replaced with a three-column "feature summary" format — the 8 most-cited decision factors from user research, displayed as simple checkmarks. Users who wanted the full comparison could expand it.
Trust signals were redesigned as a horizontal band immediately below the pricing cards: five company logos, a G2 review score (4.8/5 with 340 reviews), and a single customer quote. These moved from the page footer to the decision zone.
A staged test-and-ship process
Rather than redesigning the whole page and shipping it all at once, I advocated for a sequenced A/B testing programme — running one hypothesis at a time so we could attribute impact clearly.
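To show what attributing impact clearly means for a single experiment, here is a standard two-proportion z-test in sketch form. The visitor and conversion counts are invented for the example, and the case study doesn't say which statistical test the team actually used; only the 6.4% baseline comes from the text.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, z, p_value

# Hypothetical sample sizes; a 6.4% control against an invented variant.
lift, z, p = two_proportion_z_test(640, 10_000, 790, 10_000)
print(f"lift = {lift:.2%}, z = {z:.2f}, p = {p:.4f}")
```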
Conversion gains that compounded
Trial-to-paid conversion rose from 6.4% to 7.9%, a relative lift of roughly 23%, and the annualised ARR projection moved with it. Annual plan selection increased, average time on page came down from the 4.8-minute non-converter baseline, and the improvement held up in the first-month cohort.
What I learned about CRO
The most important lesson from this project: the highest-impact changes were the simplest ones. The recommended plan badge and trust signal placement — both of which took under 2 hours to design and 4 hours to build — drove roughly 70% of the total conversion lift.
CRO isn't about clever design. It's about removing doubt at the moment of decision. The feature comparison table wasn't the problem — anxiety about commitment was. Every change that addressed the "is this safe?" feeling outperformed every change that addressed "does this look better?"
The annual plan toggle finding was unexpected: users didn't toggle it because they didn't know they wanted annual until we made it feel like the obvious choice. Framing and defaults matter more than offering choice.

We could have run the whole programme as a single big-bang redesign, but the staged approach gave us something more valuable than a converted page — a repeatable methodology.