I Cut My PRD Time from 6 Hours to 45 Minutes. Here's Exactly How.
Six hours.
That's how long I spent writing a PRD for a payment reconciliation feature last quarter. Six hours of context-switching between Google Docs, Notion, Slack threads, and that one crucial piece of user research I knew was somewhere in my Dropbox.
I hit send on the PRD feeling exhausted but accomplished.
Three days later, it came back with 14 comments.
My QA lead found four edge cases I'd completely missed. My VP wanted a different metrics framework. Engineering flagged a technical constraint that invalidated an entire approach.
Another two days of revisions. Another review cycle. By the time we had a document the team could actually build from, we'd burned two weeks of calendar time.
Here's the kicker: This wasn't a bad PRD. It was a normal PRD, written the normal way.
And that's exactly the problem.
The Breakthrough: 45 Minutes for Production-Ready PRDs
I now write PRDs in about 45 minutes that go through review with one round of minor edits.
The documents are more thorough. The edge cases are covered. The metrics are tighter.
This isn't about cutting corners. It's about fundamentally changing when the thinking happens.
Let me show you exactly how I do it — and how you can start today.
Why "ChatGPT, Write Me a PRD" Doesn't Work
Before we dive in, let's address the elephant in the room.
You've probably tried asking ChatGPT or Claude to "write a PRD for [your feature]."
The result? A generic, surface-level document that looks professional but has zero depth. No engineering team wants to build from it. No stakeholder trusts it. You end up rewriting 80% of it anyway.
That's not what this is.
The method I use is what I call the AI-Partnership Approach:
Think first. Generate second.
I spend most of my time in exploration and questioning. The AI handles information synthesis and document structure. I handle judgment, prioritization, and domain expertise.
Here's what changes:
Traditional Approach (4-8 hours):
- Research and context gathering: 1-2 hours
- Writing first draft: 2-3 hours
- Self-review and revision: 1-2 hours
- Edge case discovery during review
- Additional revision cycles: 1-3 hours
AI-Partnership Approach (45-60 minutes):
- Context loading: 2 minutes
- Socratic exploration: 15 minutes
- Approach generation: 10 minutes
- PRD generation: 5 minutes
- Multi-perspective review: 10 minutes
- Validation and polish: 3 minutes
Similar word count. Better coverage. The difference? You discover gaps BEFORE writing, not after.
The Four Techniques That Make This Work
This isn't one magic trick. It's a system built on four core techniques.
1. Full Context Loading (The Foundation)
Before you ask AI for anything, you give it the complete picture.
In Claude Code, this looks like:
@product-strategy-2026.md @user-research-notifications.md @tech-architecture.md
I need to write a PRD for an in-app notifications center.
Those three @-mentions load the full contents of those files as context. The AI now understands:
- Your strategic priorities
- What users actually said in interviews
- What your technical architecture can support
This is the step that makes everything downstream better.
Without full context, every AI-generated document is a guess. With full context, it's an informed draft.
Pro tip: I keep a context/ folder in every project directory with the 5-8 documents that define my product area. Loading context takes me about two minutes.
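Claude Code's @-mentions do this loading for you. If you ever need to reproduce the same step against a plain LLM API, the underlying idea is just concatenation. A minimal sketch (the formatting and function name are mine, not part of any tool):

```python
def build_context_prompt(docs: dict[str, str], task: str) -> str:
    """Format named documents plus the task into one prompt.

    Mirrors what an @-mention effectively does: the model sees the
    full text of each document before it sees your request.
    """
    sections = [f"## {name}\n\n{text}" for name, text in docs.items()]
    return "\n\n".join(sections) + f"\n\n---\n\n{task}"
```

The point isn't the code; it's the discipline of sending complete documents, not summaries, before asking for anything.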
2. Socratic Questioning (The Game-Changer)
This is the technique most PMs skip.
And it's the one that saves the most revision cycles later.
Instead of jumping to "write the PRD," you ask the AI to question you first.
It asks hard, pointed questions across five categories:
Problem Clarity
"Who exactly experiences this problem? How do you know? What data supports this is worth solving?"
Solution Validation
"What alternatives did you consider? Why is this better than the next best option? What would you cut if you had half the timeline?"
Success Criteria
"What metric tells you this shipped successfully? What's the baseline? What improvement makes this worth the engineering investment?"
Constraints
"What technical limitations exist? Regulatory requirements? Dependencies for other teams?"
Strategic Fit
"How does this ladder up to company priorities? What are you choosing NOT to build by building this?"
When I built the PRD for our notifications center, the Socratic phase surfaced three critical insights I'd missed:
- Regulatory requirement around notification consent for certain transaction types
- Push notification infrastructure rate limit that would affect our rollout plan
- 40% of users had notifications disabled at the OS level — which completely changed our channel strategy
Each of those would have been a review comment. A revision cycle. A delay.
Instead, they became inputs to the first draft.
3. Multiple Approaches (The Clarity Shortcut)
Before generating the PRD, I ask the AI to propose three different approaches to the problem.
Not just "option A vs. option B" — but genuinely different strategies with different tradeoffs.
For the notifications center, the three approaches were:
Approach A: Full in-app notification center
- Custom UI, full notification history, granular preferences
- High effort (3 months), high value, serves all user segments
Approach B: Progressive enhancement
- Start with transaction notifications only, add channels over time
- Medium effort (6 weeks), fast to market, validates demand first
Approach C: API-first
- Build notification infrastructure as internal API, let teams send through it
- Medium effort, high reusability, slower visible user value
We went with Approach B.
But having three options side by side — with estimated timelines, resource needs, and risk profiles — made the trade-off conversation with my engineering lead and VP a 10-minute discussion instead of a 45-minute debate.
The context was already structured. The decision became obvious.
4. Multi-Perspective Review (The Safety Net)
After the PRD is drafted, I run it through four simulated perspectives before any human reviewer sees it:
Engineering Lead: Flags technical feasibility issues, missing API specs, unclear data models, unrealistic performance requirements.
VP Product: Challenges strategic alignment, questions prioritization, pushes on impact sizing, asks if this is the highest-value use of the team's time.
UX Researcher: Identifies assumptions about user behavior that lack evidence, flags accessibility concerns, questions if the solution matches observed user mental models.
QA Lead: Surfaces edge cases, error states, boundary conditions, regression risks. Asks what happens when the network drops mid-transaction, when a user has zero notification history, or when two notifications arrive simultaneously.
The QA perspective alone typically finds 5-8 edge cases that would have survived into the review cycle.
Each edge case caught before review saves a comment-response-revision loop that adds half a day to the process.
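Mechanically, the four perspectives are just four separate prompts run against the same draft. A sketch of how I'd structure them if scripting this myself (persona focus strings are paraphrased from above; the function is illustrative, not part of any tool):

```python
PERSONAS = {
    "Engineering Lead": "feasibility, missing API specs, unclear data models, performance",
    "VP Product": "strategic alignment, prioritization, impact sizing, opportunity cost",
    "UX Researcher": "unevidenced behavior assumptions, accessibility, user mental models",
    "QA Lead": "edge cases, error states, boundary conditions, regression risks",
}

def review_prompts(prd_text: str):
    """Yield one review prompt per persona. Running them as separate
    passes keeps the critiques sharp instead of blurring together."""
    for persona, focus in PERSONAS.items():
        yield persona, (
            f"Review the PRD below as a {persona}. Focus on: {focus}. "
            f"List concrete, actionable findings only.\n\n{prd_text}"
        )
```

Running each persona as its own pass, rather than asking for "a review from four angles" in one shot, is the design choice that makes the QA findings specific enough to act on.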
Real Example: The In-App Notifications Center PRD
Let me walk you through the actual sequence, step by step.
Step 1: Context Loading (2 minutes)
I opened Claude Code in my project directory:
@context/product-strategy-q1.md @context/user-research-engagement.md @context/technical-architecture.md
I need to write a PRD for an in-app notifications center for Careem Pay.
Let's start with the exploration phase.
Step 2: Socratic Exploration (15 minutes)
Claude Code asked me 15 questions. Here are the ones that shaped the document:
"Your user research mentions low engagement with email notifications. What's the current open rate, and do you have data on whether users prefer in-app vs. push vs. email?"
"Your strategy doc prioritizes transaction trust. How does a notifications center contribute to trust specifically, beyond general engagement?"
"What consent framework applies to financial notifications in your operating markets? Are there markets where opt-in is required vs. opt-out?"
Some answers were precise: "Email open rate is 12%, push is 34%."
Some were honest admissions: "I don't have data on in-app notification preferences — that's an assumption we need to validate."
Those honest answers became "Open Questions" in the final PRD rather than unstated assumptions. My VP appreciated that transparency.
Step 3: Approach Generation (10 minutes)
Claude Code proposed three approaches with different scope, timeline, and risk profiles.
I read through them, asked follow-up questions about the progressive enhancement approach, and confirmed that was the direction.
Step 4: PRD Generation (5 minutes)
Claude Code generated the full PRD, written directly to a file in my project directory.
The document included:
- Problem statement
- User stories
- Success metrics (with baselines and targets)
- Technical requirements
- Rollout plan
- Edge cases
- Open questions
I didn't dictate the structure. The context, the Socratic answers, and the chosen approach gave it enough information to produce a coherent first draft.
Step 5: Multi-Perspective Review (10 minutes)
I ran the document through all four perspectives. Notable findings:
Engineering: Our current notification queue couldn't handle projected volume at peak times. Recommended a separate queue for financial notifications.
UX Research: We had no data on how users categorize notifications mentally. Suggested a card sort study before finalizing taxonomy.
QA: Found seven edge cases, including: "What happens when a notification references a transaction that the user has disputed? Does the notification update, disappear, or stay with a stale status?"
Each finding was specific and actionable. I incorporated them into the document.
Step 6: Validation (3 minutes)
I ran a final validation check that scored the PRD across completeness dimensions: problem definition, user stories, success metrics, technical requirements, edge cases, rollout plan, and open questions.
Score: 87%.
The gap? Rollout plan didn't specify rollback criteria clearly enough. I added two sentences and moved on.
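The simplest version of that completeness check is mechanical: score the draft by which required sections it actually contains. A sketch (section names match the list earlier; the equal weighting is my simplification):

```python
REQUIRED_SECTIONS = [
    "Problem Statement", "User Stories", "Success Metrics",
    "Technical Requirements", "Rollout Plan", "Edge Cases", "Open Questions",
]

def completeness_score(prd_text: str) -> tuple[int, list[str]]:
    """Return a percentage score plus the sections still missing.
    Presence is judged by a case-insensitive heading match."""
    lower = prd_text.lower()
    missing = [s for s in REQUIRED_SECTIONS if s.lower() not in lower]
    found = len(REQUIRED_SECTIONS) - len(missing)
    return round(100 * found / len(REQUIRED_SECTIONS)), missing
```

A keyword check only catches absent sections. The gap my validation actually found, a rollout plan that existed but lacked rollback criteria, is a depth problem, which is why the real pass needs a model reading the content, not just matching headings.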
Total elapsed time: 50 minutes.
The Mistakes That Kill This Approach
I've taught this method to a handful of PMs on my team and in my network. The same mistakes come up every time:
Generating Before Thinking
Typing "write me a PRD for X" as the first prompt is the fastest way to get a useless document.
Without the Socratic questioning phase, the output lacks the nuance of your specific context. The AI doesn't know what it doesn't know. The questioning phase is where you surface the unknowns.
Accepting Vague Metrics
If your PRD says "increase engagement," that's not a metric.
Push for specifics:
- What engagement metric?
- What's the baseline?
- What target makes this worth building?
- What's the measurement methodology?
AI will default to vague metrics unless you push it toward specifics during exploration.
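One habit that helps: refuse to write a metric into the PRD until it answers all four questions above. As a mental model (the field names are mine):

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """A metric only counts as defined when every field is filled in."""
    name: str          # what exactly is measured
    baseline: str      # where it stands today
    target: str        # what improvement justifies building this
    methodology: str   # how it will be measured

    def is_specific(self) -> bool:
        return all([self.name, self.baseline, self.target, self.methodology])
```

"Increase engagement" fails this test on three of four fields; "push open rate, 34% baseline, 45% target, event-level tracking in weekly cohorts" passes.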
Skipping the Challenge Step
When Claude Code proposes an approach, your instinct might be to accept the first thing that sounds reasonable.
Push back.
Ask: "What's wrong with this approach?" and "What would the strongest argument against this be?"
The AI will generate counterarguments that strengthen your thinking.
Treating AI Output as Final
The 45-minute number is for a strong first draft.
You should still:
- Read every line
- Check that metrics are grounded in reality
- Validate technical requirements with your engineering lead
AI handles the synthesis. You own the judgment.
Try It This Week
I packaged this entire workflow into a free, hands-on module.
It's not a course about theory — it's a set of tools you install on your machine and use with Claude Code.
The PRD Generator gives you:
- The four techniques described above
- Templates for each phase
- Validation scoring
- Multi-perspective review framework
Module 0 (Claude Code setup): 20 minutes
Module 1 (PRD Generator): 30 minutes to install and run your first document
Start here: theainativepm.com
The Real Test
The best way to evaluate this is to try it on a real PRD you need to write this week.
Not a toy example. A real feature your team is building.
That's when you'll feel the difference.
Not just in time saved — but in the quality of questions you ask yourself before writing, the edge cases you catch early, and the review cycles you avoid.
Six hours to 45 minutes isn't magic.
It's just better thinking, structured better, with the right tools.
Product Manager at Careem (Uber), building payments for 50M+ customers. Previously at Visa and RAENA. I write about practical AI workflows for product managers.
More about me →