Product Design for B2B SaaS

Led by Jake McMahon
Founder, ProductQuant · LinkedIn

Find exactly where users drop. Before we touch a pixel.

We pull your PostHog data before we open Figma. Every dropout point quantified, every fix prioritised by revenue impact, every sprint decision tied to a number in your analytics.

If the agreed conversion metric doesn’t improve by 20% after shipping and A/B testing — we redesign the failing flow at no charge.

Three ways to engage

  • Audit + Plan · 2 weeks · One-time · from $8K
  • Design Sprint · 4–6 weeks · Fixed price · $20K–$40K
  • Design OS · Monthly retainer · $12K–$18K/mo

All sprints: Figma files · developer handoff · A/B test plan
Sprint + Design OS: 20% lift guarantee or iteration free

Analytics-first: PostHog · Amplitude · Mixpanel · Hotjar · Every friction point quantified in revenue terms · Target metric agreed in writing before sprint starts · Developer handoff included every tier

What’s actually costing you right now

Design decisions based on opinion, not data. Activation hasn’t moved in months and nobody can point to which screens are bleeding it.

You’ve shipped redesigns that felt right. Without friction data tied to a specific metric, you’re optimising by intuition — and the activation ceiling stays where it is.

You know the product has UX problems. But there’s no quantified list of what to fix first, ranked by what each fix is actually worth.

Heatmaps give you impressions. Session recordings give you examples. Neither gives you a prioritised revenue impact list. That’s what the audit produces.

When the board asks what design contributed to activation and retention this quarter, the answer is a story — not a number.

Every sprint includes a written target metric, a post-ship review call, and A/B data. The board gets a number. Not “we redesigned the onboarding and it feels better.”

Three tiers. One continuous method.

Audit → Sprint → Design OS

Each tier builds on the last. The Audit is the recommended entry point — every sprint and retainer month starts from friction data, not from opinion.

Tier 1 · Diagnosis

Audit + Plan

Two weeks. Analytics-led friction diagnosis before a single design decision is made.

$8K – $15K

One-time · Fixed price

  • Full-funnel dropout map — every step, percentage attached
  • Friction inventory — prioritised list with revenue impact per item
  • Heatmap + session review — top 10 highest-traffic screens
  • Competitor reference audit — three to five direct competitors, flows only
  • Sprint brief — included if you proceed; not an extra cost
  • 60-minute readout call — recording + written summary

Guarantee does not apply to the Audit tier — the audit diagnoses; it does not redesign.

Start with an Audit →

Tier 2 · Redesign

Design Sprint

Four to six weeks. One scoped conversion flow redesigned end-to-end, from friction data to developer-ready handoff.

$20K – $40K

One-time · Fixed price

  • Written target metric — agreed before work begins
  • High-fidelity Figma files — every screen, every state
  • Developer handoff package — annotations and written notes
  • A/B test plan — tool, split, sample size, metric
  • Post-ship review call — four to six weeks after you ship
  • 20% lift guarantee — one iteration sprint free if the metric falls short

Scope a Sprint →
Tier 3 · Ongoing

Design OS

Monthly retainer. Continuous design motion against a live friction inventory.

$12K – $18K

Per month · 3-month minimum

  • One sprint per month — next-highest-impact item from the inventory
  • Friction inventory update — new dropout data reviewed monthly
  • A/B results log — outcomes from shipped flows recorded and linked
  • Design system maintenance — new components added each sprint month
  • 30-minute monthly check-in — async substitute available
  • Context maintained — no re-briefing; every sprint starts from the last data point

Sprint guarantee applies per sprint within the retainer, on written agreement.

Discuss Design OS →

The guarantee

20% lift or we redesign at no charge.

This is in the service contract — not in the marketing copy.

How it works

Every sprint has a written, agreed target metric before work begins.

If the redesigned flow does not produce a 20% or greater relative improvement in that metric — as measured by an A/B test after shipping — we deliver one iteration sprint at no charge. The metric is chosen before the sprint starts, not retrospectively.

Example: baseline onboarding step 3 completion 40% → target 48% (40% × 1.20). If the result is 46%, the guarantee triggers. The iteration sprint is scheduled within 30 days of the post-ship review call.
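The arithmetic above can be sketched in a few lines (illustrative only; `guarantee_triggers` is a hypothetical helper, not contract language):

```python
def guarantee_triggers(baseline: float, result: float, threshold: float = 0.20) -> bool:
    """True if the relative lift falls short of the guaranteed 20% threshold."""
    relative_lift = (result - baseline) / baseline
    return relative_lift < threshold

# Example from above: baseline step-3 completion 40%, A/B result 46%.
# Relative lift is 15%, below the 20% threshold, so the guarantee triggers.
print(guarantee_triggers(0.40, 0.46))  # → True
```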

What the guarantee covers

The specific flow scoped in the sprint brief. The metric agreed in writing before sprint start. One iteration sprint at no charge if the metric does not reach the 20% threshold.

What the guarantee does not cover

Metrics outside the agreed sprint scope. The Audit tier. Cases where the client does not ship the redesign or does not run an A/B test within 90 days of delivery.

What this guarantee is NOT

It is not a money-back guarantee. It is a redo commitment — one iteration sprint delivered at no charge, tied to the failing flow. A refund is not the mechanic.

Why 20% relative, not absolute

20% relative improvement (e.g., 40% → 48%) is operationalisable, verifiable by A/B test, and meaningful at every baseline level. It is the contract threshold — not a marketing number.

Honest comparison

vs. Hiring a senior designer · vs. Brand agency · vs. DIY

All three comparisons a buyer runs before signing. Addressed directly — not framed as an attack.

Time to first output
  • ProductQuant: 2 weeks (Audit). Sprint starts within 2 weeks of signed contract.
  • Senior in-house hire: 60–120 days to recruit, onboard, and reach productive output.
  • Brand / marketing agency: 3–6 months to project delivery.

Cost structure
  • ProductQuant: fixed, bounded investment per tier. No equity, no benefits overhead.
  • Senior in-house hire: $120K–$180K/yr total comp before equity. Open-ended.
  • Brand / marketing agency: variable by project. Often scoped broadly at $80K+.

Primary output
  • ProductQuant: analytics-led friction audit plus redesigned conversion flows, developer-ready.
  • Senior in-house hire: general product design across whatever backlog the team assigns.
  • Brand / marketing agency: brand identity, marketing sites, campaign assets.

Success measured by
  • ProductQuant: specific conversion metric agreed before sprint starts (activation rate, step completion, etc.).
  • Senior in-house hire: shipping velocity and stakeholder satisfaction.
  • Brand / marketing agency: client approval and aesthetic outcome.

Metric guarantee
  • ProductQuant: yes. 20% relative improvement or failing flow redesigned free.
  • Senior in-house hire: no contractual metric guarantee.
  • Brand / marketing agency: no conversion outcome guarantee.

What you inherit
  • ProductQuant: Figma files, annotations, developer handoff, A/B test plan, friction inventory.
  • Senior in-house hire: one person's design decisions, undocumented unless they choose to document.
  • Brand / marketing agency: brand guide and visual assets, rarely conversion-focused.

When it makes sense
  • ProductQuant: you need a specific flow improved with a measurable outcome. Series A/B. CFO wants a bounded investment.
  • Senior in-house hire: you need design in daily standups. Post-PMF, high design volume. Design is a core function.
  • Brand / marketing agency: you need a brand identity system, campaign materials, or a marketing site, not product UX.

These paths are not mutually exclusive. An Audit + Sprint can define exactly what your first in-house designer inherits — so their first week is productive rather than spent reverse-engineering what exists.

The method

D.R.I.V.E. From friction data to shipped redesign.

Every sprint runs this sequence. Each phase has defined deliverables. Nothing is a black box.

D

Diagnose

Read-only analytics access. Full user journey mapped: where users enter, where they drop, what they do in between. Every dropout point quantified as a percentage. Revenue impact calculated at confirmed MRR per user.

  • Full-funnel dropout map (signup to activation)
  • Heatmap and session review — top 10 highest-traffic screens
  • Competitor reference audit — three to five products, flows only
  • Friction inventory with revenue impact per item
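As an illustration of how a friction item can be priced in revenue terms (the decomposition and the numbers below are assumptions for the sketch, not the audit's actual model):

```python
def revenue_impact_per_month(users_entering_step: int,
                             dropout_rate: float,
                             downstream_paid_rate: float,
                             mrr_per_user: float) -> float:
    """Estimate monthly revenue lost at one funnel step.

    users_entering_step: users reaching this step per month
    dropout_rate: fraction who abandon at this step
    downstream_paid_rate: fraction of continuing users who eventually pay
    mrr_per_user: confirmed MRR per paying user
    """
    users_lost = users_entering_step * dropout_rate
    return users_lost * downstream_paid_rate * mrr_per_user

# Hypothetical item: 2,000 users/month reach step 3, 35% drop there,
# 10% of those who continue convert to paid at $99 MRR.
print(round(revenue_impact_per_month(2000, 0.35, 0.10, 99.0), 2))  # → 6930.0
```

Ranking every inventory item by a figure like this is what turns the audit into a prioritised fix list rather than a pile of observations.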
R

Research

User session deep-dives, behavioural segmentation, and pattern analysis. Three to five session reviews on the target flow. Runs in parallel with Diagnose during the Audit tier.

  • Session recordings reviewed for the target flow
  • Behavioural pattern identified from session data
  • Cohort segmentation where analytics permit
I

Iterate

Two to three design directions explored for the target flow. One direction selected with documented rationale tied to friction data — not designer preference. Full Figma delivery: all screens, all states, all edge cases.

  • Design exploration: two to three directions
  • Direction selection with documented rationale
  • High-fidelity Figma files — every screen, every state
V

Validate

A/B test plan written as part of sprint close — not as an afterthought. Tool recommendation (PostHog, Optimizely, VWO, LaunchDarkly), traffic split, minimum sample size for 95% confidence, and what to measure.

  • A/B test plan — tool, split, sample size, metric
  • Post-ship review call — four to six weeks after client ships
  • Guarantee assessment at post-ship review
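The minimum-sample-size figure in the test plan can be approximated with the standard two-proportion formula. A sketch using the baselines from the guarantee example (95% confidence and 80% power are assumed defaults here, not quoted contract terms):

```python
import math

def min_sample_per_variant(p_baseline: float, p_target: float,
                           z_alpha: float = 1.96,  # two-sided 95% confidence
                           z_beta: float = 0.84    # 80% power
                           ) -> int:
    """Approximate minimum users per variant for a two-proportion z-test."""
    p_bar = (p_baseline + p_target) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_baseline * (1 - p_baseline)
                                      + p_target * (1 - p_target))) ** 2
    return math.ceil(numerator / (p_target - p_baseline) ** 2)

# Baseline 40% step completion, target 48% (a 20% relative lift):
print(min_sample_per_variant(0.40, 0.48))  # roughly 600 users per variant
```

The smaller the expected lift, the larger the required sample, which is why the test plan fixes the target metric and baseline before traffic is split.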
E

Embed

Developer handoff package: Figma developer mode, written notes for any interaction that cannot be expressed in Figma, component annotations for spacing, states, and colour. Design rationale documented so the next person who touches the flow has the decision history.

  • Component annotations — spacing, states, typography, colour
  • Written handoff notes for complex interactions
  • Design rationale documentation
  • Design system additions — new components and states added to the library

Is this the right fit?

This is a specific offer for a specific situation.

It works well for a narrow ICP. If you don't fit it, we'll say so before scoping, not after.

Good fit

  • B2B SaaS at Series A or Series B — product-market fit found, activation or retention not where it should be
  • Director of Product, VP Product, or CEO who needs design ROI in metrics, not aesthetics
  • CFO declined a senior designer headcount — bounded investment preferred
  • You have analytics instrumented (PostHog, Amplitude, Mixpanel). Without data, the audit has nothing to map.
  • Head of Design with accumulated design debt — needs a system the team can own
  • Head of Growth who needs a specific flow redesigned and A/B tested in under 6 weeks

Not a fit

  • Pre-product-market fit, pre-revenue — no analytics data means no friction audit, and no guarantee
  • Brand identity, marketing design, social creative, or campaign materials — scope is product UX only
  • You need a designer in daily standups and Slack all day — engagement model doesn’t cover that
  • You need engineering implementation included — we deliver design and handoff; your team implements
  • You need a full brand and visual identity system from scratch — a narrow use case, negotiate separately

Frequently asked questions

Which analytics tools do you work with?

PostHog, Amplitude, Mixpanel, Hotjar, FullStory. Read-only access only — we never request write permissions or admin credentials. If analytics are not instrumented, the audit proceeds on public signals and session heuristics, clearly marked as inference-based, at a reduced scope and rate.

Do I need an Audit before booking a sprint?

Strongly recommended. A sprint scoped without friction data cannot be guaranteed — the guarantee requires a baseline dropout rate agreed upfront. If you already have analytics reports showing dropout rates per screen, we'll review them before scoping. If they're of equivalent quality to the Audit output, we can proceed directly to a sprint.

What does the developer handoff include?

A Figma file with developer mode enabled, component annotations (spacing, interaction states, typography, colour specs), written handoff notes for any interaction that cannot be expressed in Figma alone (animation timing, API-dependent states), and an A/B test plan specifying the tool, traffic split, minimum sample size, and metric. Engineers should not need to ask what was designed.

What exactly does the guarantee cover?

The specific flow scoped in the sprint brief, with the metric agreed in writing before sprint start: one iteration sprint at no charge if the metric does not improve by 20% relative in your A/B test. The guarantee does not cover metrics outside the agreed scope, the Audit tier, cases where the redesign is not shipped within 90 days, or cases where no A/B test is run. This is in the service contract — not in the marketing copy.

Who runs the A/B test?

You do. We design the test — tool recommendation, traffic split, minimum sample size for 95% confidence, and what to measure. Your engineering team runs it using whatever A/B testing capability is in place (PostHog, LaunchDarkly, Optimizely, VWO, or a feature flag in your codebase). Setting up the A/B testing tool is not in scope.

How does the Design OS retainer work? Can I pause or cancel?

It runs month-to-month after the 3-month minimum, with 30 days' written notice to cancel. One pause per year, up to 30 days, without penalty — 10 business days' advance notice required. If a ProductQuant-side delay affects your shipping timeline, the delay is credited against the following month's retainer.

How do you handle data access and compliance?

We operate on a read-only basis with your analytics data. We do not collect, store, or process end-user personal data — only aggregated metrics and session heuristics. We do not claim SOC 2 or GDPR compliance as a selling point. If your procurement process requires a specific compliance framework, raise it before scoping and we'll address it directly.

Find the three design changes that will move activation. Before you commission anything.

Start with the Audit + Plan. Two weeks. A prioritised friction inventory ranked by revenue impact. If you proceed to a sprint, the audit becomes your sprint brief at no extra cost.