Generate Key Results From a Given Objective
Produces 3-5 candidate key results per metric category for a qualitative objective, each scored for measurability and stretch. Built for PMs who have a clear objective but struggle to pick the right KRs.
When to use this prompt
Use this when you have a clear qualitative objective but are stuck on which key results to commit to. You will need the objective text and any relevant baseline data. The prompt generates candidate KRs across 4 metric types (usage, outcome, quality, inputs) rather than from a single category, which prevents the common failure of committing only to vanity metrics. It is a brainstorming tool, not a decision tool; you still need to pick which KRs to commit to. Best used in two passes: generate candidates with the prompt, then workshop the final choice with your team.
The Prompt
You are a product manager generating candidate key results for a given objective. Use a balanced scorecard approach: cover usage, outcome, quality, and input metrics.

Objective: {{objective}}
Business context: {{business_context}}
Available baselines: {{baselines}}
Time horizon: {{horizon}}

Generate 3-5 candidate KRs in each of these 4 categories:

1. USAGE METRICS: how many people use the thing, how often, how intensely. Examples: weekly active users, sessions per user, adoption percentage.
2. OUTCOME METRICS: what changes in the user's life or business because of the feature. Examples: conversion rate, churn rate, revenue per user, task completion time.
3. QUALITY METRICS: how well the thing works. Examples: error rate, satisfaction score, support ticket volume, SLO attainment.
4. INPUT METRICS: leading indicators that predict success. Examples: content created, invites sent, models trained, integrations connected.

For each candidate KR, produce:
- Metric name
- Baseline (from input or marked TBD)
- Suggested target
- Why this metric matters for the objective
- Measurability score: EASY (already instrumented), MEDIUM (needs new event), HARD (needs new data pipeline)
- Stretch score: BAU (obvious), STRETCH (60-70 percent probability), MOONSHOT (under 30 percent probability)

After the candidates, produce:

RECOMMENDED COMMIT SET: Pick the 3 KRs from the candidates that best cover the objective. Aim for a mix of outcome and leading indicators. Justify each pick in 1 sentence.

COUNTER-METRICS TO WATCH: 2 metrics that should NOT get worse. These are not commits but guardrails.

REJECTED CANDIDATES: KRs that looked promising but were not picked, with a 1-line reason.

Do not invent baselines. If one is missing, say TBD and note that baseline collection is the first week's work.
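If you run this template programmatically rather than pasting it into a chat window, the {{placeholders}} are a simple substitution step. Below is a minimal sketch, assuming the template is saved as a plain-text file and sent through the OpenAI Python SDK; the file name, fill values, and model choice are illustrative (the fill values echo the example output below), and any chat-capable client works the same way.

```python
# Minimal sketch: fill the template's {{placeholders}} and send it to a chat model.
# Assumes the OpenAI Python SDK; swap in whatever LLM client your team uses.
import re
from openai import OpenAI

# Hypothetical file name: wherever you keep the prompt text.
PROMPT_TEMPLATE = open("generate_key_results_prompt.txt").read()

def fill_template(template: str, variables: dict[str, str]) -> str:
    """Replace every {{name}} placeholder; fail loudly if one is missing."""
    def lookup(match: re.Match) -> str:
        key = match.group(1)
        if key not in variables:
            raise KeyError(f"missing template variable: {key}")
        return variables[key]
    return re.sub(r"\{\{(\w+)\}\}", lookup, template)

prompt = fill_template(PROMPT_TEMPLATE, {
    "objective": "Make the invoices dashboard the trusted source of truth for overdue status.",
    "business_context": "B2B SaaS invoicing product; overdue status is tracked in spreadsheets today.",
    "baselines": "Median days-to-collection: 18. Billing-admin NPS: 32. Spreadsheet exports/week: 1200.",
    "horizon": "One quarter.",
})

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Failing loudly on a missing variable matters more than it looks: a half-filled template silently produces KRs for the wrong context.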
Example Output
Objective: Make the invoices dashboard the trusted source of truth for overdue status.

USAGE CANDIDATES
- Weekly active billing admins using new sort (baseline 0, target 250, measurability EASY, stretch STRETCH)
- Clicks on overdue row per session (baseline TBD, target 4, measurability MEDIUM, stretch BAU)

OUTCOME CANDIDATES
- Median days-to-collection (baseline 18 days, target 12 days, measurability EASY, stretch STRETCH)
- NPS for billing admins (baseline 32, target 50, measurability EASY, stretch STRETCH)
- Spreadsheet-export events per week (baseline 1200, target 400, measurability EASY, stretch STRETCH)

QUALITY CANDIDATES
- Dashboard p99 load time (baseline 4.2s, target 1s, measurability EASY, stretch STRETCH)
- Support tickets mentioning "wrong overdue" (baseline 15/month, target 3/month, measurability MEDIUM, stretch STRETCH)

INPUT CANDIDATES
- Percent of admins completing new tour (baseline 0, target 70 percent, measurability EASY, stretch BAU)
- Reminder draft creations per day (baseline 0, target 100, measurability MEDIUM, stretch STRETCH)

RECOMMENDED COMMIT SET
1. Median days-to-collection 18 to 12: the true outcome metric.
2. Spreadsheet-export events 1200 to 400: behavioral proof of trust shift.
3. NPS 32 to 50: qualitative validation.

COUNTER-METRICS: Dashboard p99 load time (do not regress), support ticket volume (do not increase).

REJECTED: Tour completion rate (input metric, too far from outcome); WAU on new sort (usage metric, adopts but may not change behavior).
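If you want the candidates to land in tooling rather than a doc, the per-KR output format above maps onto a small record type. Here is a sketch in Python; the class and field names are assumptions that mirror the prompt's output spec, not something the prompt prescribes.

```python
# Sketch: a record type mirroring the prompt's per-KR fields, plus the two
# scoring enums. Names are illustrative, not prescribed by the prompt.
from dataclasses import dataclass
from enum import Enum

class Measurability(Enum):
    EASY = "already instrumented"
    MEDIUM = "needs new event"
    HARD = "needs new data pipeline"

class Stretch(Enum):
    BAU = "obvious"
    STRETCH = "60-70 percent probability"
    MOONSHOT = "under 30 percent probability"

@dataclass
class CandidateKR:
    metric: str
    category: str            # usage | outcome | quality | input
    baseline: str            # a value from the input, or "TBD"
    target: str
    rationale: str
    measurability: Measurability
    stretch: Stretch

# One candidate from the example output above:
days_to_collection = CandidateKR(
    metric="Median days-to-collection",
    category="outcome",
    baseline="18 days",
    target="12 days",
    rationale="The true outcome metric: trust in overdue status should speed collections.",
    measurability=Measurability.EASY,
    stretch=Stretch.STRETCH,
)
```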
Recommended Tools
Amplitude and Mixpanel generate the baseline data you need to commit to real KR targets, and their dashboards can track KR progress automatically. Notion is where the OKRs live once committed, with links back to the analytics charts. Use Amplitude or Mixpanel to source the data and Notion to host the commitments and the weekly review ritual.
Frequently Asked Questions
When should I use this prompt?
Use it when you have a qualitative objective but you are staring at a blank space where the KRs should go. Also use it when a team keeps proposing the same narrow type of KR (usually usage metrics) and needs to be pushed toward outcome metrics. It is less useful when you already have strong KR intuition for the objective; in that case, draft the KRs yourself and run prompt 29 (quality check) instead. This prompt is for brainstorming; the quality check prompt is for validation.
How do I choose between candidates?
Pick the KR closest to a real business outcome first; that is your anchor. Then add 1-2 leading indicators that will move before the outcome KR does, so you know directionally whether the work is on track. Avoid committing to 3 KRs of the same type; if you commit to 3 usage metrics, you will ship features that drive usage without changing outcomes. A balanced commit set includes 1 outcome KR, 1 usage or input KR, and 1 quality KR. Counter-metrics are never commits; they are guardrails you watch to make sure the commits are not ruining something else. The balance rule is simple enough to check mechanically, as sketched below.
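A minimal sketch of that check in Python, assuming each commit is tagged with one of the four metric categories; the function name and warning wording are illustrative.

```python
# Sketch: lint a proposed commit set against the balance heuristic above.
# Each commit is (metric_name, category), where category is one of
# "usage", "outcome", "quality", or "input".
def lint_commit_set(commits: list[tuple[str, str]]) -> list[str]:
    warnings = []
    categories = [category for _, category in commits]
    if "outcome" not in categories:
        warnings.append("No outcome KR: pick one as the anchor.")
    if len(commits) > 1 and len(set(categories)) == 1:
        warnings.append(f"All KRs are {categories[0]} metrics: mix in other types.")
    if not any(c in ("usage", "input") for c in categories):
        warnings.append("No leading indicator: nothing will move before the outcome KR.")
    return warnings

# A deliberately unbalanced set, to show the lint firing:
unbalanced = [
    ("Weekly active billing admins", "usage"),
    ("Sessions per admin", "usage"),
    ("Sort-feature adoption", "usage"),
]
print(lint_commit_set(unbalanced))
# ['No outcome KR: pick one as the anchor.',
#  'All KRs are usage metrics: mix in other types.']
```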