Product Design for B2B SaaS
We pull your PostHog data before we open Figma. Every dropout point quantified, every fix prioritised by revenue impact, every sprint decision tied to a number in your analytics.
If the agreed conversion metric doesn’t improve by 20% after shipping and A/B testing — we redesign the failing flow at no charge.
Three ways to engage
All sprints: Figma files · developer handoff · A/B test plan
Sprint + Design OS: 20% lift guarantee or a free iteration sprint
What’s actually costing you right now
Design decisions based on opinion, not data. Activation hasn’t moved in months and nobody can point to which screens are bleeding it.
You’ve shipped redesigns that felt right. Without friction data tied to a specific metric, you’re optimising by intuition — and the activation ceiling stays where it is.
You know the product has UX problems. But there’s no quantified list of what to fix first, ranked by what each fix is actually worth.
Heatmaps give you impressions. Session recordings give you examples. Neither gives you a prioritised revenue impact list. That’s what the audit produces.
When the board asks what design contributed to activation and retention this quarter, the answer is a story — not a number.
Every sprint includes a written target metric, a post-ship review call, and A/B data. The board gets a number. Not “we redesigned the onboarding and it feels better.”
Three tiers. One continuous method.
Each tier builds on the last. The Audit is the recommended entry point — every sprint and retainer month starts from friction data, not from opinion.
Tier 1 · Diagnosis
Two weeks. Analytics-led friction diagnosis before a single design decision is made.
$8K – $15K
One-time · Fixed price
Guarantee does not apply to the Audit tier — the audit diagnoses; it does not redesign.
Start with an Audit →
Tier 2 · Redesign
Four to six weeks. Focused redesign of one high-impact flow, tied to a specific metric.
$20K – $40K
Fixed price per sprint
20% relative improvement or one iteration sprint at no charge. In writing.
Scope a Sprint →
Tier 3 · Ongoing
Monthly retainer. Continuous design motion against a live friction inventory.
$12K – $18K
Per month · 3-month minimum
Sprint guarantee applies per sprint within the retainer, on written agreement.
Discuss Design OS →
The guarantee
This is in the service contract — not in the marketing copy.
How it works
If the redesigned flow does not produce a 20% or greater relative improvement in the agreed metric — as measured by an A/B test after shipping — we deliver one iteration sprint at no charge. The metric is chosen before the sprint starts, not retrospectively.
Example: baseline onboarding step 3 completion 40% → target 48% (40% × 1.20). If the result is 46%, the guarantee triggers. The iteration sprint is scheduled within 30 days of the post-ship review call.
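The trigger logic in the worked example reduces to one comparison. A minimal sketch (function name and figures are illustrative, not contract language):

```python
def guarantee_triggers(baseline: float, result: float, lift: float = 0.20) -> bool:
    """True if the measured result falls short of the agreed relative lift."""
    target = baseline * (1 + lift)   # e.g. 40% baseline → 48% target
    return result < target

# Figures from the worked example: 46% measured against a 48% target.
owed = guarantee_triggers(baseline=0.40, result=0.46)  # True → iteration sprint owed
```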
Covered: the specific flow scoped in the sprint brief. The metric agreed in writing before sprint start. One iteration sprint at no charge if the metric does not reach the 20% threshold.
Not covered: metrics outside the agreed sprint scope. The Audit tier. Cases where the client does not ship the redesign or does not run an A/B test within 90 days of delivery.
It is not a money-back guarantee. It is a redo commitment — one iteration sprint delivered at no charge, tied to the failing flow. A refund is not the mechanic.
20% relative improvement (e.g., 40% → 48%) is operationalisable, verifiable by A/B test, and meaningful at every baseline level. It is the contract threshold — not a marketing number.
Honest comparison
The three comparisons every buyer runs before signing. Addressed directly — not framed as an attack.
| Criterion | ProductQuant | Senior in-house hire | Brand / marketing agency |
|---|---|---|---|
| Time to first output | 2 weeks (Audit). Sprint starts within 2 weeks of signed contract. | 60–120 days to recruit, onboard, and reach productive output | 3–6 months to project delivery |
| Cost structure | Fixed, bounded investment per tier. No equity, no benefits overhead. | $120K–$180K/yr total comp before equity. Open-ended. | Variable by project. Often scoped broadly at $80K+ |
| Primary output | Analytics-led friction audit + redesigned conversion flows, developer-ready | General product design across whatever backlog the team assigns | Brand identity, marketing sites, campaign assets |
| Success measured by | Specific conversion metric agreed before sprint starts — activation rate, step completion, etc. | Shipping velocity and stakeholder satisfaction | Client approval and aesthetic outcome |
| Metric guarantee | Yes — 20% relative improvement or failing flow redesigned free | No contractual metric guarantee | No conversion outcome guarantee |
| What you inherit | Figma files, annotations, developer handoff, A/B test plan, friction inventory | One person's design decisions, undocumented unless they choose to document | Brand guide, visual assets — rarely conversion-focused |
| When it makes sense | Need a specific flow improved with a measurable outcome. Series A/B. CFO wants a bounded investment. | Need design in daily standups. Post-PMF, high design volume. Design is core function. | Need a brand identity system, campaign materials, or marketing site — not product UX |
These paths are not mutually exclusive. An Audit + Sprint can define exactly what your first in-house designer inherits — so their first week is productive rather than spent reverse-engineering what exists.
The method
Every sprint runs this sequence. Each phase has defined deliverables. Nothing is a black box.
Diagnose
Read-only analytics access. Full user journey mapped: where users enter, where they drop, what they do in between. Every dropout point quantified as a percentage. Revenue impact calculated at confirmed MRR per user.
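The revenue-impact calculation above is simple arithmetic per funnel step. A sketch with hypothetical figures (the function name and all numbers are illustrative assumptions):

```python
def monthly_revenue_at_risk(entrants: int, dropout_rate: float,
                            mrr_per_user: float) -> float:
    """MRR lost each month at one funnel step, priced at confirmed MRR per user."""
    return entrants * dropout_rate * mrr_per_user

# Hypothetical step: 1,000 users reach it, 35% drop off, $50 confirmed MRR each.
at_risk = monthly_revenue_at_risk(1000, 0.35, 50.0)  # $17,500/month at this step
```

Ranking every step by this number is what turns a funnel chart into a prioritised fix list.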
Research
User session deep-dives, behavioural segmentation, and pattern analysis. Three to five session reviews on the target flow. Runs in parallel with Diagnose during the Audit tier.
Iterate
Two to three design directions explored for the target flow. One direction selected with documented rationale tied to friction data — not designer preference. Full Figma delivery: all screens, all states, all edge cases.
Validate
A/B test plan written as part of sprint close — not as an afterthought. Tool recommendation (PostHog, Optimizely, VWO, LaunchDarkly), traffic split, minimum sample size for 95% confidence, and what to measure.
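The minimum-sample-size step can be sketched with the standard two-proportion power calculation. The z-values below assume two-sided 95% confidence and 80% power; the 80% power figure is an assumption for illustration, not something the source specifies:

```python
import math

def min_sample_per_arm(p1: float, p2: float,
                       z_alpha: float = 1.96,    # two-sided 95% confidence
                       z_beta: float = 0.8416) -> int:  # 80% power (assumed)
    """Minimum users per A/B arm to detect a shift in conversion from p1 to p2."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting the 40% → 48% lift from the guarantee example:
n = min_sample_per_arm(0.40, 0.48)  # roughly 600 users per arm
```

Smaller baseline rates or smaller lifts push this number up sharply, which is why the test plan fixes the sample size before traffic is split.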
Embed
Developer handoff package: Figma developer mode, written notes for any interaction that cannot be expressed in Figma, component annotations for spacing, states, and colour. Design rationale documented so the next person who touches the flow has the decision history.
Is this the right fit?
This works well for a narrow ICP. If you don’t fit it, we’ll say so before scoping — not after.
Frequently asked
Start with the Audit + Plan. Two weeks. A prioritised friction inventory ranked by revenue impact. If you proceed to a sprint, the audit becomes your sprint brief at no extra cost.