📝 User Story Writing

Generate Edge Cases and Error States

Surfaces edge cases, boundary conditions, and error states a user story should handle. Built for PMs who want to harden thin stories before engineering estimation.

This prompt generates a structured edge case list for any user story using a 5-dimension taxonomy: boundary values, invalid inputs, concurrent access, failure modes, and permission variations. Each case includes a suggested expected behavior.

When to use this prompt

Use this when a user story is written but feels thin, or when an engineer asks 'what about X?' and you realize you haven't thought through the failure modes. You will need the user story and the happy-path acceptance criteria. The prompt works best on stories that touch state (write operations, multi-step flows) rather than pure read operations. It will not replace a good QA engineer, but it will surface the common gotchas that cause bugs to leak into production when a story ships without stress-testing its assumptions.

The Prompt

Role: Product Manager
Variables: {{user_story}}, {{happy_path}}, {{system_context}}
You are a senior QA engineer and product manager hybrid, tasked with surfacing edge cases and error states for a user story before it enters engineering estimation. Use a 5-dimension taxonomy: boundary values, invalid inputs, concurrent access, failure modes, and permission variations.

User story:
{{user_story}}

Happy-path criteria:
{{happy_path}}

System context: {{system_context}}

For each of the 5 dimensions, generate 2-4 edge cases. For each case, produce:
- CASE NAME (short)
- TRIGGER: the exact condition that activates the case
- EXPECTED BEHAVIOR: what the system should do
- SUGGESTED TEST: one line describing how QA would verify it

Dimensions:

1. BOUNDARY VALUES — Minimum, maximum, zero, negative, unicode, very long strings.

2. INVALID INPUTS — Malformed data, missing fields, wrong types, injection attempts.

3. CONCURRENT ACCESS — Two users editing the same resource, race conditions, duplicate submissions.

4. FAILURE MODES — Network failures, database timeouts, third-party API unavailable, partial writes.

5. PERMISSION VARIATIONS — Anonymous user, wrong role, expired session, cross-tenant access attempts.

End with a "RISK PRIORITY" list: rank all edge cases 1-N by the likelihood of production impact if missed. Be direct: if 3 cases matter and the rest are theater, say so.

Example Output

1. BOUNDARY VALUES
- Empty comment submit: triggers when body is 0 chars. Expected: blocked with "Comment cannot be empty." Test: submit empty form, assert error.
- 10,000 char comment: triggers at boundary. Expected: accepted, rendered with scroll. Test: paste 10k chars, verify storage and display.
- Unicode emoji in body: triggers when body contains multi-byte characters. Expected: stored and rendered correctly on all clients. Test: post an emoji comment, verify display on web and mobile.

2. INVALID INPUTS
- HTML injection: triggers when body contains script tags. Expected: escaped on render, not executed.
- Null user ID: Expected: 400 with "Authentication required" error.
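The "escaped on render, not executed" expectation can be shown with the standard library alone; `render_comment` is a hypothetical name standing in for whatever templating layer a real system uses.

```python
# Sketch of the HTML-injection case: script tags are neutralized
# on render rather than executed. Uses only the stdlib html module.
import html

def render_comment(body: str) -> str:
    # Escapes &, <, >, and quotes so markup renders as literal text.
    return html.escape(body)

rendered = render_comment('<script>alert(1)</script>')
assert rendered == '&lt;script&gt;alert(1)&lt;/script&gt;'
```

Most template engines do this by default; the edge case exists because a single `|safe` or `dangerouslySetInnerHTML` escape hatch undoes it.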

3. CONCURRENT ACCESS
- Two users editing same comment: Expected: last-write-wins with conflict warning banner on second writer.
- Duplicate submit from double-click: Expected: second submission is a no-op with idempotency key.
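The double-click case hinges on an idempotency key: the client sends the same key with both submissions, and the server treats the second as a no-op. A minimal sketch, where the in-memory set is an assumption standing in for whatever durable store a real system would use:

```python
# Sketch of double-click protection via an idempotency key.
# _seen_keys is an in-memory stand-in for a real dedup store.
_seen_keys: set[str] = set()

def submit_comment(idempotency_key: str, body: str) -> bool:
    """Return True if the comment was saved, False if it was a duplicate no-op."""
    if idempotency_key in _seen_keys:
        return False  # second submission with the same key: no-op
    _seen_keys.add(idempotency_key)
    # ... persist body here ...
    return True

assert submit_comment("k1", "hello") is True
assert submit_comment("k1", "hello") is False  # double-click absorbed
```

The QA test is exactly the two assertions: same key twice, one write.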

4. FAILURE MODES
- Database timeout during save: Expected: retry once, then show "Save failed, please retry" without losing draft.
- Moderation API unreachable: Expected: fail open or closed per policy — confirm with trust and safety team.
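The "retry once, then surface the error without losing the draft" behavior is small enough to sketch directly. `save_fn` is a hypothetical callable that may raise `TimeoutError`; the draft is never mutated, so the caller keeps it either way.

```python
# Sketch of the database-timeout case: one retry, then a user-facing
# error. save_fn is a hypothetical callable for illustration.
def save_with_retry(save_fn, draft: str) -> str:
    for _attempt in range(2):  # one initial try plus one retry
        try:
            save_fn(draft)
            return "saved"
        except TimeoutError:
            continue
    return "Save failed, please retry"  # draft stays with the caller

calls = []
def flaky_save(draft):
    calls.append(draft)
    if len(calls) == 1:
        raise TimeoutError  # first attempt times out, retry succeeds

assert save_with_retry(flaky_save, "my draft") == "saved"
```

QA can verify both branches by injecting a save function that always times out versus one that fails once.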

5. PERMISSION VARIATIONS
- Suspended user attempts to comment: Expected: 403 with "Your account is under review."
- Cross-tenant access attempt: Expected: 404 to avoid leaking existence of the resource.
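The two permission cases encode a deliberate asymmetry: a suspended user gets an honest 403, but a cross-tenant request gets a 404 so the resource's existence is not leaked. A sketch, where `User` and the status codes are illustrative assumptions:

```python
# Sketch of the permission cases: 403 for suspended users,
# 404 (not 403) for cross-tenant requests to hide existence.
from dataclasses import dataclass

@dataclass
class User:  # hypothetical illustration type
    tenant_id: str
    suspended: bool = False

def authorize(user: User, resource_tenant_id: str) -> int:
    if user.suspended:
        return 403  # "Your account is under review."
    if user.tenant_id != resource_tenant_id:
        return 404  # deny without confirming the resource exists
    return 200

assert authorize(User("t1", suspended=True), "t1") == 403
assert authorize(User("t1"), "t2") == 404
assert authorize(User("t1"), "t1") == 200
```

The ordering matters: the suspension check runs first so a suspended user never learns anything from the tenant check.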

RISK PRIORITY
1. HTML injection (security, always ship fix)
2. Database timeout handling (common, data loss risk)
3. Concurrent edit conflict (rare but trust-damaging)
...the rest are low likelihood.

Frequently Asked Questions

When should I use this prompt?

Use it on stories that touch state, handle user input, or integrate with third-party services. It is wasted on pure read-only stories with no failure modes. Run it before engineering estimation because edge cases materially affect story points, and surfacing them late creates rework. If your team has a dedicated QA engineer who already does edge-case analysis, skip it for their stories but use it for stories where QA is absent.

How do I avoid over-engineering from edge case lists?

The RISK PRIORITY list at the end is the protection. Commit to handling only the top 3-5 edge cases in the first version and explicitly defer the rest. Document the deferred cases in a "known limitations" section of the story so nothing is forgotten. Over-engineering happens when teams treat every edge case as must-have; use the priority list to say no out loud. A story that handles 5 edge cases well is better than one that half-handles 15.