📄 PRD & Spec Drafting

Generate Success Metrics and KPI Tree

Produces a KPI tree linking a high-level goal to leading, lagging, and counter-metrics for a new feature. Built for PMs who need measurable success criteria in their PRDs.

This prompt builds a KPI tree starting from a top-level goal and decomposing into primary metric, leading indicators, lagging indicators, and counter-metrics. It follows the AARRR and North Star Metric frameworks.

When to use this prompt

Use this when your PRD has a clear feature direction but fuzzy success criteria like 'improve engagement.' The prompt forces a disciplined metric decomposition. You will need the top-level goal, the feature concept, and any existing instrumentation. It works best when you provide your current baseline metrics so the output can suggest realistic targets. Without baselines, it will produce generic placeholder targets that you will need to replace. This is valuable at the PRD-review stage when executives push back on vague outcome language.

The Prompt

Role: Product Manager
Variables: {{top_level_goal}}, {{feature_concept}}, {{baselines}}, {{horizon}}
You are a product manager building a KPI tree for a new feature. The tree must connect a top-level business goal to specific metrics the team can track in analytics, with clear leading and lagging indicators plus counter-metrics.

Top-level goal: {{top_level_goal}}
Feature concept: {{feature_concept}}
Current baseline metrics: {{baselines}}
Time horizon: {{horizon}}

Produce the KPI tree with these levels:

1. NORTH STAR — The single metric that best represents success for this feature. State the metric, the current baseline, the target, and the time horizon.

2. PRIMARY METRICS (2-3) — Metrics directly influenced by the feature. These are what the team will monitor weekly. Each with current value, target value, and why it matters.

3. LEADING INDICATORS (3-5) — Early signals that will move before the primary metrics. These are what the team watches daily to know if the feature is working. Each with definition and expected direction.

4. LAGGING INDICATORS (2-3) — Business outcomes that will confirm the feature worked but will take weeks or months. Each with definition and expected delay.

5. COUNTER-METRICS (2-3) — Metrics that could get worse and that would signal unintended harm. Each with definition and alerting threshold.

6. INSTRUMENTATION NOTES — Which metrics are already tracked, which need new events, and any analytics work required before launch.

Be realistic about targets. If current baselines are not provided, mark targets as TBD and flag this as the single highest priority to resolve before sprint commit.
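The {{variable}} placeholders above can be filled programmatically before the prompt is sent to a model. A minimal sketch, assuming a simple string-replacement approach; the helper name and the sample values are illustrative, not part of the prompt itself:

```python
# Hypothetical helper: fill {{name}} placeholders in the prompt template.
PROMPT_TEMPLATE = """Top-level goal: {{top_level_goal}}
Feature concept: {{feature_concept}}
Current baseline metrics: {{baselines}}
Time horizon: {{horizon}}"""

def fill_prompt(template: str, variables: dict[str, str]) -> str:
    """Replace each {{name}} placeholder with its value."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

prompt = fill_prompt(PROMPT_TEMPLATE, {
    "top_level_goal": "Reduce days-to-collection on overdue invoices",
    "feature_concept": "Overdue-first sort on the invoices page",
    "baselines": "Median days-to-collection: 18",
    "horizon": "90 days post-launch",
})
```

Plain replacement keeps the template readable in a PRD doc; a real pipeline might validate that no unfilled `{{` remains before sending.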

Example Output

NORTH STAR: Median days-to-collection on overdue invoices. Baseline: 18 days. Target: 12 days within 90 days of launch.

PRIMARY METRICS
- Percent of billing admins using overdue-first sort (baseline 0, target 70 percent at 30 days).
- Click-through rate on overdue invoice rows (baseline TBD, target 25 percent).

LEADING INDICATORS
- Daily active billing admins viewing invoices page (expected up 5 percent).
- Average time on invoices page per session (expected down from 8 to 5 minutes as sort reduces manual scanning).
- Reminder email drafts created from overdue rows (new event, target 100 per day by week 4).
- In-app tour completion rate (target 80 percent, early signal for adoption).

LAGGING INDICATORS
- Cash collection cycle time (expected improvement visible in month 2).
- Customer NPS for billing admins (expected up 5 points by quarter end).

COUNTER-METRICS
- Support tickets mentioning "invoices dashboard" (alert if up 15 percent week over week; signals confusion with new sort).
- Bounce rate on invoices page (alert if up 10 percent; signals users leaving because of new sort).

INSTRUMENTATION NOTES
- Existing: daily active admins, time on page, ticket mentions.
- New events needed: overdue sort toggle, overdue row click, reminder draft creation.
- Blocked on analytics PR before sprint 14.
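The counter-metric thresholds in the example ("alert if up 15 percent week over week") reduce to a simple week-over-week check. A sketch under the assumption that "up 15 percent" means strictly above the threshold; the function name is hypothetical:

```python
def wow_alert(this_week: float, last_week: float, threshold_pct: float) -> bool:
    """Return True if the week-over-week increase exceeds the alert threshold."""
    if last_week == 0:
        return this_week > 0  # any movement off a zero baseline is a signal
    change_pct = (this_week - last_week) / last_week * 100
    return change_pct > threshold_pct

# Ticket mentions of "invoices dashboard" at a 15 percent threshold:
wow_alert(47, 40, 15.0)  # True: +17.5 percent week over week
wow_alert(46, 40, 15.0)  # False: exactly +15 percent, not above
```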

Frequently Asked Questions

When should I use this prompt?

Run it when drafting the success metrics section of a PRD, and before leadership asks 'how will we know if this worked?' so you are not inventing an answer on the spot. Use it ahead of a launch readiness review so counter-metrics are identified early enough to instrument. It is less useful for purely cosmetic changes where no one expects measurable outcomes; for those, note 'qualitative feedback only' and move on. Save the KPI tree discipline for features expected to move numbers.

What if my baseline data is incomplete?

The prompt will mark targets as TBD when baselines are missing and surface instrumentation work as the top priority. Use that as a forcing function: before the feature can be committed to a sprint, the analytics team must either confirm baselines from existing data or add the instrumentation needed to capture them. Shipping features without baselines means you cannot know if they worked, which defeats the purpose of the KPI tree. Treat missing baselines as a blocker, not a nice-to-have.
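The TBD-as-blocker rule can be encoded directly in whatever tooling generates the metrics table. A sketch under the assumption that an improvement target means a percentage reduction from baseline (as with days-to-collection); the helper name is hypothetical:

```python
from typing import Optional

def propose_target(metric: str, baseline: Optional[float],
                   improvement_pct: float) -> str:
    """Mark the target TBD when no baseline exists, else derive it."""
    if baseline is None:
        # Missing baseline is a blocker, not a nice-to-have.
        return f"{metric}: target TBD (blocker: instrument baseline first)"
    target = baseline * (1 - improvement_pct / 100)
    return f"{metric}: target {target:g} (baseline {baseline:g})"

propose_target("Median days-to-collection", 18, 33)
propose_target("Overdue row CTR", None, 25)  # flagged TBD
```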