Case Study

FunnelKit One-Click Upsells

Generating $47M in post-purchase revenue through behavioural design

The challenge: WooCommerce stores had no post-purchase strategy. Customers with payment saved and intent still active were leaving from a static thank-you page. Existing upsell plugins either killed conversion with friction or repelled customers with aggressive tactics.
My role: Lead Product Designer, covering behavioural research, design strategy, the A/B testing framework, and ethical disclosure design
Team: 1 PM, 4 Engineers, 1 Data Analyst
Timeline: 7 months, March to September 2024
Tools: Figma, Optimizely, Hotjar, Amplitude

My design philosophy: The gap between how users think and how systems are built is almost always the real problem. In post-purchase design, that gap is timing. Systems offer immediately. Customers need a moment first.


The Opportunity Nobody Was Taking


The 30 seconds after a customer completes a purchase are unlike any other moment in e-commerce. Payment is saved. Intent is high. Positive commitment mode. Industry research puts post-purchase upsell acceptance at 25-35%, higher than cold traffic (2-4%), higher than email campaigns (8-12%).

Yet 94% of WooCommerce stores were showing customers nothing but a static thank-you page.

The market had three options, all flawed.

  • Plugin A: Required customers to re-enter payment info; conversion dropped to ~8%, defeating the purpose
  • Plugin B: Genuinely one-click, but showed random offers with zero targeting logic
  • Plugin C: Smart targeting, but 45-minute setup, most merchants never activated it

Nobody had combined true one-click purchase + relevant targeting + sub-5-minute setup. That was the gap.


The Central Hypothesis


Post-purchase upsell acceptance is not primarily a product or pricing problem. It is a timing and framing problem.

I built this hypothesis before any design work began, then structured the entire project around testing it.

The timing test: I showed 30 shoppers the same upsell offer, identical product, price, and design, at different points after a completed purchase. Participants were recruited via UserTesting (screener: purchased online in the last 7 days).

When shown           Acceptance rate   What customers said
Immediately (0 s)    18%               "I just spent $80. Now you want more? I'm done."
3 seconds            31%               "Oh, this would actually go perfectly with what I bought."
5+ seconds           19%               "I'd already moved on mentally."

Same customer. Same offer. Same design. The difference was entirely when.

The framing test: During beta, I ran framing tests with 50 participants across two rounds of 25 to validate before committing to the copy direction.

Framing                   Acceptance
"Complete Your Order"     31%
"You Might Also Like"     24%
"Don't Miss This Deal"    18%

13 percentage points from word choice alone. Copy is not decoration in behavioural design. It is the mechanism.

Both tests confirmed the hypothesis. The project had a clear design target: deliver an offer that feels like help at exactly the right moment.

Timeline diagram, buyer psychology states with acceptance rates plotted at each interval


What Existing Products Got Wrong


After reviewing 500 post-purchase sessions in Hotjar and interviewing 30 customers who had declined upsell offers, I found that the failures had three consistent causes.

Predatory framing destroyed trust. Competitor upsells appeared immediately after payment on full-screen blocking pages: "WAIT! EXCLUSIVE OFFER, LAST CHANCE BEFORE IT EXPIRES!"

"This feels like a bait-and-switch. I just paid. Now they're trapping me with another offer."

The intent was urgency. The effect was resentment.

Irrelevance made offers invisible. Most plugins showed bestsellers regardless of what was in the cart. A yoga mat buyer seeing a jump rope isn't a recommendation. It's noise that signals "we don't know anything about you."

Friction collapsed conversion. One-click: 25-35% acceptance. Re-entry of payment required: 6-9%. Same product, same timing, 4x difference. Any friction at this moment breaks the psychological window.

Best-in-class examples told the opposite story. Amazon's "Frequently bought together" uses co-purchase data and informational framing. Dollar Shave Club's "Complete your kit" positions additions as finishing something. Both feel like help, not sales. That distinction drove every design decision.

Three failure modes with annotated competitor screenshots and best-in-class examples side by side


Testing the Hypothesis


Test 1: Fullscreen takeover

After checkout: full-screen offer page. Clear hierarchy. Prominent CTA.

10 customers. 8 of 10 called it "pushy." 6 of 10 scanned for a skip button before reading the offer.

Finding: fullscreen pages trigger a defence response. Anything blocking order confirmation reads as threatening, not helpful. The timing hypothesis was right, but this delivery method activated the wrong psychological state regardless of timing.

Fullscreen heat map, eye tracking showing skip-button scanning pattern

Test 2: Embedded in the thank-you page

Offer below order confirmation. Non-blocking. No interruption.

Better reception, but 5 of 8 missed the offer entirely.

Finding: prominence competes with confirmation. Subtle enough not to compete = invisible. I needed something that waited for the right moment rather than existing statically on the page.

Test 3: Sequential reveal at 3 seconds

Informed by the timing test data, I separated the two jobs entirely: confirm the order first, surface the offer second.

The thank-you page loads normally. Full order confirmation. Primary job done completely.

After 3 seconds, when the timing data showed customers shift from closing to validation mode, a card slides in from the right edge:

"Complete Your Order" "Customers who bought [product] also loved:" [Product image, Title, Price] [Add to Order, $29.99] / [No thanks]

Why slide in from the right: it matches notification behaviour users already know, it is non-blocking, and it is spatially dismissible. It is also consistent with the Sliding Cart's drawer pattern, so customers in the FunnelKit ecosystem already had this mental model.
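A minimal sketch of that reveal mechanism in TypeScript. The element ID, transition values, and page hook are my illustrative assumptions, not the shipped plugin code:

```ts
// Sequential reveal: the confirmation page does its primary job first;
// the offer card slides in from the right edge only after the delay.
const OFFER_DELAY_MS = 3000; // from the timing test: ~3 s marks the shift into validation mode

function revealOfferCard(card: HTMLElement): void {
  card.hidden = false;
  card.style.transform = "translateX(100%)"; // start off-screen right
  card.style.transition = "transform 300ms ease-out";
  void card.offsetWidth; // force a reflow so the slide-in actually animates
  card.style.transform = "translateX(0)"; // non-blocking: confirmation stays interactive
}

window.addEventListener("load", () => {
  const card = document.getElementById("upsell-offer-card"); // hypothetical ID
  if (card) setTimeout(() => revealOfferCard(card), OFFER_DELAY_MS);
});
```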

50-user test of the final design: 31% acceptance. 94% offer visibility.

Sequential reveal, annotated timing diagram showing confirmation then 3 seconds then slide-in


Design Principles That Guided Every Decision


Relevance over volume. Five targeting rules, all derived from the same two sources: co-purchase pattern analysis of FunnelKit merchant sales data, cross-referenced with a merchant survey asking "how do you currently decide what to recommend after a purchase?"

  • Product-specific (yoga mat to yoga blocks): Most common pattern in both data and merchant behaviour
  • Category-based (coffee to brewing equipment): Merchants called these "natural product families"
  • Cart value targeting (orders over $100 to premium upgrade): Higher-value orders showed different co-purchase patterns in the data
  • Customer history (first purchase vs. repeat): Repeat customers bought differently, less price-sensitive, more likely to try new product lines
  • Bundle pricing (complete the kit, save 15%): Merchants using manual bundles consistently reported higher post-purchase satisfaction

Merchants configure this through a five-question setup form. The conditional logic underneath is complex. That's the point.
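To make that concrete, here is a minimal TypeScript sketch of a first-match rule engine over the five rule types. Every interface, field name, and example product is a hypothetical illustration, not FunnelKit's actual schema:

```ts
// Hypothetical shape of the data a targeting rule sees.
interface OrderContext {
  productIds: string[];
  categories: string[];
  orderTotal: number;
  isFirstPurchase: boolean;
}

interface Offer {
  productId: string;
  price: number;
  bundleDiscountPct?: number;
}

// One targeting rule: a predicate over the order plus the offer it unlocks.
interface TargetingRule {
  name: string;
  matches: (order: OrderContext) => boolean;
  offer: Offer;
}

// The five rule types, expressed as plain predicates.
const rules: TargetingRule[] = [
  { name: "product-specific", // yoga mat -> yoga blocks
    matches: o => o.productIds.includes("yoga-mat"),
    offer: { productId: "yoga-blocks", price: 29.99 } },
  { name: "category-based", // coffee -> brewing equipment
    matches: o => o.categories.includes("coffee"),
    offer: { productId: "pour-over-kit", price: 34.0 } },
  { name: "cart-value", // orders over $100 -> premium upgrade
    matches: o => o.orderTotal > 100,
    offer: { productId: "premium-upgrade", price: 49.0 } },
  { name: "customer-history", // repeat buyers see new product lines
    matches: o => !o.isFirstPurchase,
    offer: { productId: "new-line-sampler", price: 19.0 } },
  { name: "bundle-pricing", // complete the kit, save 15%
    matches: o => o.productIds.includes("starter-kit"),
    offer: { productId: "kit-completion", price: 24.65, bundleDiscountPct: 15 } },
];

// First matching rule wins; no match means no offer is shown at all.
function selectOffer(order: OrderContext): Offer | undefined {
  return rules.find(r => r.matches(order))?.offer;
}
```

First-match-wins keeps the merchant-facing mental model as simple as the five-question form that configures it.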

One downsell maximum, tested, not assumed. When a customer declines, one lower-priced alternative appears. Beta data: 18% of declines accepted the downsell. I tested two downsells in sequence with a 20-user subset. At the second offer: 94% "No thanks" rate, and 7 of 20 expressed annoyance unprompted. "Now I feel like I'm being nagged." The trust cost exceeded the revenue value. Maximum one downsell is a rule grounded in testing, not instinct.
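The cap is worth expressing as control flow. A sketch with hypothetical types, where the sequence is hard-limited to one downsell rather than left configurable:

```ts
interface Offer { productId: string; price: number; }
type OfferResult = "accepted" | "declined";

// One upsell, then at most one downsell, then stop.
async function runOfferSequence(
  showOffer: (offer: Offer) => Promise<OfferResult>,
  upsell: Offer,
  downsell?: Offer,
): Promise<void> {
  if ((await showOffer(upsell)) === "accepted") return;
  // A second downsell tested as nagging (94% decline rate); never chain past one.
  if (downsell) await showOffer(downsell);
}
```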

Iterative beta loop. 200 merchants over 8 weeks. Each wave of feedback produced a direct design response.

  • Weeks 1-2: "I don't know which products to offer" → pre-configured offer templates
  • Weeks 3-4: "I want to test offers" → built-in A/B testing with auto-winner declaration (sketched after this list)
  • Weeks 5-6: "Mobile offers feel cramped" → redesign for 85% screen width with larger touch targets
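On the auto-winner mechanic: the case study does not specify the statistic used, but a two-proportion z-test on acceptance rates at 95% confidence is one plausible way to declare a winner automatically. A hypothetical TypeScript sketch:

```ts
// Declare a winner between variants A and B once acceptance rates differ
// significantly; 1.96 is the two-sided 95% confidence threshold.
function declareWinner(
  aAccepts: number, aViews: number,
  bAccepts: number, bViews: number,
): "A" | "B" | "keep testing" {
  const pA = aAccepts / aViews;
  const pB = bAccepts / bViews;
  const pPooled = (aAccepts + bAccepts) / (aViews + bViews);
  const se = Math.sqrt(pPooled * (1 - pPooled) * (1 / aViews + 1 / bViews));
  const z = (pA - pB) / se;
  if (Math.abs(z) < 1.96) return "keep testing"; // not yet significant
  return z > 0 ? "A" : "B";
}

// declareWinner(310, 1000, 240, 1000) -> "A" (31% beats 24% at n = 1,000 each)
```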

Targeting setup, 5 questions mapped to rule engine, downsell flow, mobile redesign before/after


The Ethics Section


Legal review required explicit charge disclosure before any one-click purchase.

Initial button: "Add to Order"
Required: "Add to Order, $29.99" with the microcopy "Your saved payment method will be charged."

Acceptance dropped four percentage points (31% to 27%).

We kept the disclosure.

This decision deserves its own section because it's the one I'm most proud of. 27% with full transparency is better than 31% with a legal and trust risk. These are customers' credit cards. They deserve to know what they're agreeing to before they tap. And a designer who optimises conversion at the expense of user trust isn't doing their job; they're doing the job wrong.

Senior designers hold positions that aren't always the most commercially expedient. This was one of those moments.

Button design, before/after with legal annotation and conversion comparison


Merchant Results


Revenue figures are calculated from FunnelKit Analytics, which tracks confirmed post-purchase order completions attributed to upsell acceptance: not self-reported, not estimated from averages.

For a store with 500 monthly orders at the 31% average acceptance rate: 155 acceptances × $14.50 average upsell value ≈ $2,248/month, or $26,970/year.


Customer Behaviour Data


Aggregated across 2,847 merchant A/B tests, these patterns are consistent enough to inform default settings for new merchants.

What we tested                       Winner          Margin
Complementary vs. similar products   Complementary   34% vs 22%
Bundled vs. individual items         Bundled         37% vs 28%
Discounted vs. full-price            Discounted      33% vs 26%
Image offers vs. text-only           Image           31% vs 18%
3-second delay vs. immediate         3 seconds       31% vs 24%

The timing finding from pre-launch research held at scale across thousands of real-world tests.


Where I Was Wrong


International merchants were invisible to me. 23% of installations came from outside the US, UK, and Australia, yet USD was hardcoded in templates and "Complete Your Order" translates awkwardly in several languages. The result was two tiers of experience: the merchants I'd imagined, and everyone else. (Multi-language and gateway coverage shipped in V2, April 2025.)

Time-based scheduling was core, not an edge case. 34% of post-launch merchants requested seasonal offer scheduling. Promotions are how merchants think about their business, and I'd categorised something central as niche. (Added in V1.3, January 2025.)

I measured conversion, not long-term trust. My hypothesis for the V2 cohort analysis: 1-2 upsell acceptances per customer correlates with higher LTV (engaged customers buy more). 4+ acceptances may correlate with a measurable churn increase, the "I feel marketed to" effect. If that's what the data shows, the design implication is clear: a per-customer frequency cap. Not "show every eligible offer to every eligible customer," but "protect the merchant's brand relationship, not just their short-term revenue."
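A minimal sketch of what that per-customer frequency cap could look like, assuming a hypothetical store of lifetime acceptance counts; the actual cap value would come out of the V2 cohort analysis:

```ts
// Hypothesis under test: 1-2 lifetime acceptances correlate with higher LTV,
// 4+ with churn, so eligibility is checked before any offer is selected.
const MAX_LIFETIME_ACCEPTANCES = 2; // placeholder pending cohort data

function isEligibleForOffer(
  acceptanceCounts: Map<string, number>, // customerId -> lifetime acceptances
  customerId: string,
): boolean {
  // Past the cap, protect the brand relationship: show no offer at all.
  return (acceptanceCounts.get(customerId) ?? 0) < MAX_LIFETIME_ACCEPTANCES;
}
```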


What's Next


  • AI offer selection, ML model for highest-probability upsell per customer (Q2 2025)
  • Cohort LTV analysis, upsell-exposed vs. unexposed, informing frequency cap design
  • Subscription upsells, different psychology, needs its own design treatment

  • 22,847 active installations (6 months)
  • 31% average upsell acceptance
  • $47M total incremental revenue
  • 4 min setup time (vs. 45+ min for competitors)
  • 4.8 / 5 rating (356 reviews)