I spent three hours last Tuesday debugging an AI-generated feature that did exactly what I asked for.
That's the problem. It did exactly what I asked for. Unfortunately, what I asked for wasn't what I actually needed. The authentication flow was technically correct, but it didn't handle the edge case where users switch between SSO and password login. I hadn't mentioned that because I didn't think to mention it.
This happens constantly. You give Claude or ChatGPT a clear prompt, they build something impressive, and then you realise you forgot to specify something obvious. Back to the drawing board. Another round of iterations. More wasted tokens.
Turns out there's a better way. And it's embarrassingly simple: let the AI ask you questions first.
The Prompt Paradox: Why Perfect Prompts Don't Exist
Here's what I've finally accepted after two years of AI-assisted development: I'm terrible at writing prompts for complex features. Not because I can't communicate clearly. Because I don't know what I don't know.
When I say "build me a user authentication system," I've got a mental model of what that means. Login form, password reset, maybe OAuth. What I'm not thinking about is session timeout behaviour, account lockout policies, password complexity rules, multi-device session management, or what happens when someone changes their email address mid-session.
Those aren't exotic edge cases. They're requirements I'll discover the hard way, three iterations into implementation, when the feature is 80% complete and we've burned through $50 in API tokens.
Traditional software development solved this decades ago. You don't hand a developer a one-line requirement and expect them to build the right thing. You have requirements gathering sessions, stakeholder interviews, specifications. The developer asks clarifying questions before writing code.
Why were we treating AI differently? Why were we expecting a single prompt to contain everything a human interviewer would extract over an hour-long conversation?
Someone on Twitter finally articulated what should've been obvious all along.
The Inversion: Let AI Drive the Conversation
In late December 2025, Thariq, who works on Anthropic's Claude Code team, shared a workflow that had developers immediately reconsidering their approach:
"My favourite way to use Claude Code to build large features is spec based... ask Claude to interview you using the AskUserQuestionTool... then make a new session to execute the spec."
6,429 likes. 1.6 million views. Developers weren't just reading this; they were immediately trying it.
The core insight is deceptively simple: instead of you trying to anticipate every requirement upfront, let Claude ask you questions. It's trained on millions of software projects. It knows what edge cases exist. It knows what clarifications developers typically need. Let it extract that information from you.
Developers who tried this reported sessions with 40+ questions. By the end, they had a detailed specification document covering requirements they never would've thought to include in their original prompt.
That's not a gimmick. That's the difference between building something three times and building it right the first time.
How It Works: The Three-Phase Pattern
The Interview Technique breaks development into three distinct phases. It sounds bureaucratic, but each phase eliminates a category of waste.
Phase 1: The Interview
You start with a broad goal and let Claude drive the conversation. The prompt I've been using looks something like this:
```
I want to build [feature description]. Before writing any code, interview me
thoroughly about this project. Ask about:

- Technical requirements and constraints
- User experience and interface preferences
- Edge cases and error handling
- Integration with existing systems
- Performance and scalability needs
- Security considerations

Use the AskUserQuestionTool to gather information until you have enough to
write a complete specification.
```

Claude then starts asking questions. Not generic ones, but contextual questions based on what you've described. For an authentication system, it might ask: "What identity providers should be supported?" "Should sessions persist across browser restarts?" "What's your policy on concurrent sessions from multiple devices?"
You answer conversationally. You don't need to be comprehensive. Claude will follow up on anything that needs clarification.
This phase typically takes 10-20 minutes. It feels like being interviewed by a very thorough product manager who actually understands the technical implications of every answer.
Phase 2: The Specification
Once Claude has gathered enough information, it synthesises your answers into a specification document. Not your words regurgitated back, but a structured spec that captures requirements, constraints, and implementation details.
Justin Michel documented his workflow with specific instructions that have become a template for others:
"Create spec.md... Claude interviews using AskUserQuestionTool... be very in-depth... then write the output spec to the file."
3,523 likes. 322,000 views. The community was paying attention.
The specification becomes the source of truth. It captures everything discussed, organised logically. You review it, make corrections, and now you've got a document that represents your actual requirements, not your best guess at a prompt.
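For a sense of what you're reviewing, here's the rough shape my spec files tend to take. The headings are my own convention rather than anything the tools require, so adapt them to your project:

```markdown
# Feature Specification: [feature name]

## Overview
One-paragraph summary of the goal and the problem it solves.

## Requirements
- Functional requirements extracted from the interview
- Non-functional requirements (performance, security, compliance)

## Constraints
- Existing systems to integrate with, tech stack, deadlines

## Edge Cases and Error Handling
- Each edge case surfaced in the interview, with the agreed behaviour

## Out of Scope
- Anything explicitly deferred, so the implementation session doesn't wander

## Acceptance Criteria
- Testable statements that a later session (or a human) can verify
```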
Phase 3: The Execution
Here's where it connects to the Ralph Wiggum technique.

You start a new session. (This is important. Fresh context, no confusion from the interview conversation.) You give Claude the specification and tell it to implement.
Because the spec is comprehensive, Claude can work autonomously with fewer mid-implementation questions. It knows what authentication providers to support, what error messages to display, what happens when sessions expire. All those details you'd normally discover mid-build are already documented.
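The kickoff prompt for that execution session can be short, because the spec carries the detail. Something along these lines works; the file name spec.md is just the convention from Phase 2, so use whatever you actually saved:

```
Read spec.md in full before writing any code. Implement the feature exactly
as specified. If the spec is ambiguous on a point, choose the most
conservative interpretation and note the decision rather than guessing
silently. When you're done, run the test suite and fix any failures.
```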
Combine this with a Ralph Wiggum loop, and you can set Claude to iterate autonomously until all tests pass against the specification. The interview extracts what you want. Ralph ensures it gets built correctly.
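If you want to see what that combination looks like mechanically, here's a minimal sketch of a Ralph-style loop driven by the spec. It assumes the Claude Code CLI is installed as `claude` and run in its non-interactive print mode (`-p`), and that `npm test` stands in for whatever your real test command is; treat it as a starting point, not a finished harness:

```bash
#!/usr/bin/env bash
# Minimal sketch of an interview-then-Ralph workflow.
# Assumptions: spec.md already exists from the interview phase,
# the Claude Code CLI is available as `claude` (non-interactive -p flag),
# and `npm test` is your project's test command -- swap in whatever applies.

MAX_ITERATIONS=20

for i in $(seq 1 "$MAX_ITERATIONS"); do
  echo "--- Iteration $i ---"

  # Ask Claude to move the implementation closer to the spec.
  claude -p "Read spec.md. Implement or fix the feature so it satisfies the spec. Run the tests and address any failures you find."

  # Stop as soon as the test suite passes against the spec.
  if npm test; then
    echo "Tests pass after $i iteration(s)."
    exit 0
  fi
done

echo "Hit the iteration cap without a passing test suite. Review manually."
exit 1
```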
The Formula That Went Viral
About a week after Thariq's original tweet, the pattern evolved into something more specific. A designer named 0xDesigner distilled the approach into what they called "the formula":
"The formula for getting the most out of Claude Code: 'I want [goal/outcome]' + 'interview me thoroughly to extract ideas and intent' + ultrathink + (plan mode on) thank me later"
3,315 likes. 234,000 views. People were calling it a "cheat code."
What makes this formula work is the combination of elements:
Goal/outcome gives Claude the destination. Not implementation details, just where you're trying to go.
Interview thoroughly inverts the dynamic. Claude drives the conversation, extracting context you wouldn't have volunteered.
Ultrathink (Claude's extended reasoning mode) ensures Claude is actually processing your answers deeply, not just collecting them.
Plan mode keeps Claude from jumping into implementation before the interview is complete.
The result? Developers reported getting specifications that covered requirements they hadn't considered. Design tradeoffs became explicit upfront, when changes are cheap, not three iterations in when they're expensive.
Platform-Specific Implementation
Claude Code: Native Interview Support
Claude Code has the AskUserQuestionTool built in. It's designed for exactly this kind of interactive questioning. When Claude invokes it, you get a direct prompt for your response, maintaining conversation flow without context pollution.
Prompt:

```
Interview me about implementing a real-time notification system. Ask about
delivery channels, user preferences, rate limiting, and offline handling
using the AskUserQuestionTool until you have enough to write a complete spec.
```

Claude will typically ask 15-40 questions depending on complexity. Be patient. The more questions it asks, the fewer surprises you'll encounter during implementation.
ChatGPT and Claude Web
The pattern works without native tool support too. You just need to be more explicit:
```
Before we implement anything, I want you to interview me about this project.
Act as a senior technical architect gathering requirements. Ask me one
question at a time about:

- Core functionality and user flows
- Technical constraints and existing systems
- Edge cases and error handling
- Performance requirements
- Security needs

Keep asking questions until you have enough to write a detailed
specification. Start with your first question now.
```

The one-question-at-a-time instruction is important. Otherwise, some models will dump ten questions at once, which defeats the conversational dynamic.
Cursor and Other Tools
Cursor IDE supports similar patterns through its Composer feature. Start a Composer session with interview instructions, let it ask questions in the chat interface, then have it generate a spec file that becomes the implementation reference.
The ScriptByAI requirements gathering CLI takes a different approach. It's an open-source tool that first analyses your existing codebase, then conducts two rounds of five yes/no questions about what you're trying to build. If you don't know an answer, you can say "idk" and it'll make a sensible default choice. At the end, it generates a markdown specification ready for implementation.
That's not quite as thorough as a full interview, but it's faster for smaller features where you don't need 40 questions.
When 75 Questions Is Too Many
Not everyone loves the interview approach.
Rob Zolkos, who built a custom /interview command for his Claude Code setup, reported sessions generating 75 questions for a "simple chat idea." The result was a 400-line plan. Comprehensive? Absolutely. Excessive? Maybe.
874 likes, 106,000 views, and a fair bit of discussion about interview fatigue.
Here's my take after running dozens of interview sessions: there's a sweet spot. Too few questions (under 10), and you miss important requirements. Too many questions (over 50), and you're wasting time on edge cases that may never materialise.
For small features, cap the interview at 10-15 questions. For major systems, 30-40 is reasonable. For "simple chat ideas," 75 is probably overkill.
You can control this with your prompt:
```
Interview me about this feature. Ask a maximum of 20 questions, focusing on
the most critical requirements and likely edge cases. Skip questions about
rarely-occurring scenarios.
```

Claude will prioritise. It's pretty good at judging what's essential versus nice-to-know.
The Research Behind It
What surprised me was how much academic research supports this approach.
Anthropic's own research on AI interviewing tested a three-stage process: Planning, Interviewing, Analysis. They ran 1,250 participants through AI-conducted interviews that adapted based on responses.
The results were striking. 97.6% satisfaction rate. 99.12% would recommend the format. Interviews typically ran 10-15 minutes and extracted insights that participants reported they wouldn't have volunteered in a written questionnaire.
That's not about code. It's about conversation dynamics. AI interviews work because they're adaptive. The model asks follow-up questions based on your actual responses, not a fixed script. That's exactly what makes the Interview Technique effective for requirements gathering.
On the requirements engineering side, research from the LLMREI project found that large language models can capture approximately 70% of intended requirements through interview-style questioning. They generate context-dependent questions similar to human interviewers, picking up on domain-specific terminology and asking relevant follow-ups.
70% might not sound impressive until you compare it to the alternative. A single-shot prompt with no interview? You're lucky to capture 30% of what you actually need. The rest emerges through painful iteration.
Combining Interview + Ralph Wiggum
The Interview Technique and Ralph Wiggum loops are complementary. They solve different problems in the development workflow.
Interview handles specification: extracting what you want BEFORE any coding happens.
Ralph handles execution: iterating UNTIL what you want actually works.
Roasbeef, CTO at Lightning Labs, made this connection explicit in early January discussions about intercepting AskUserQuestion calls to enable "proxied interview" patterns within Ralph loops. The idea: even during autonomous execution, the system can pause to gather clarification, then resume.
That's getting into advanced territory, but the principle is clear. Interview first, spec second, Ralph loop third. Each phase reduces waste in the subsequent phase.
Here's my current workflow for any feature that'll take more than a few hours:
- Interview session (15-30 mins): Let Claude extract requirements
- Spec review (10-15 mins): Verify the specification matches my intent
- Implementation (variable): Ralph loop with spec as the reference document
- Review (30-60 mins): Human verification of AI-generated code
That might look like more process than "just code it." It is. But I'm not spending three hours debugging features that did exactly what I asked for instead of what I needed.
What I've Learned Running This Workflow
I've been using the Interview Technique for about four weeks now. Here's what's changed:
I'm writing better prompts without meaning to. Going through interviews has taught me what Claude actually needs to know. Now even my quick one-off prompts include more context because I've internalised what questions Claude would ask.
Requirements documents are actually useful. I used to write specs that nobody read, including me. Now the spec is the primary artifact. It's referenced throughout implementation. It's updated when requirements change. It's the source of truth that Ralph loops execute against.
Fewer "almost right" implementations. Before, I'd get a feature that was 80% correct and spend hours on the remaining 20%. Now the first implementation is usually 95%+ correct because the spec covered edge cases I'd have missed.
My clients like it. When I share the interview transcript and resulting spec with clients, they can see exactly what was discussed and what decisions were made. It's documentation that writes itself.
The tradeoff? It's slower to start. Instead of immediately seeing code, you're answering questions for 20 minutes. That feels unproductive until you realise you've just eliminated two or three implementation iterations.
Key Takeaways
For Individual Developers:
- Start every complex feature with an interview session
- Let Claude ask at least 15 questions before generating specs
- Save specifications as markdown files for reference during implementation
- Cap questions at 40-50 for even the most complex features
For Team Leads:
- Use interview transcripts as documentation
- Have junior developers run interview sessions with senior review of specs
- The spec becomes the acceptance criteria for feature completion
- Pair the Interview Technique with Ralph loops for autonomous execution
For AI Tool Evaluators:
- This pattern works across Claude Code, ChatGPT, Cursor, and others
- Native interview tools (like AskUserQuestionTool) make it smoother but aren't required
- The underlying principle (AI asks questions before implementing) is platform-agnostic
- Expect 10-15 minute overhead per feature, offset by reduced iteration cycles
You don't need to be a better prompt engineer. You need to be a better product owner.
Stop writing specification documents disguised as prompts. Start having specification conversations instead. Claude's got questions. You've just got to give it permission to ask them.
---
Sources
- Thariq (@trq212). "My favorite way to use Claude Code to build large features is spec based..." Twitter/X. 28 December 2025. https://x.com/trq212/status/2005315275026260309
- 0xDesigner. "The formula for getting the most out of Claude Code..." Twitter/X. 3 January 2026. https://x.com/0xDesigner
- Justin Michel (@JustinMitchel). "Create spec.md... Claude interviews using AskUserQuestionTool..." Twitter/X. 30 December 2025. https://x.com/JustinMitchel
- Rob Zolkos (@robzolkos). Custom /interview command workflow. Twitter/X. 28 December 2025. https://x.com/robzolkos/status/2005379466886005243
- Anthropic Research. "Anthropic Interviewer: Research on AI-conducted interviews." Anthropic. 2025. https://www.anthropic.com/research/anthropic-interviewer
- LLMREI Research. "Large Language Models for Requirements Elicitation." arXiv. 2025. https://arxiv.org/html/2507.02564v1
- ScriptByAI. "Requirements Gathering with Claude Code." ScriptByAI Documentation. 2025. https://www.scriptbyai.com/requirements-gathering-claude-code/
- Atcyrus. "Claude Code AskUserQuestion Tool Guide." Atcyrus Stories. 2025. https://www.atcyrus.com/stories/claude-code-ask-user-question-tool-guide
Related Reading
- The Ralph Wiggum Technique: How Developers Are Shipping Code While They Sleep - The companion technique for autonomous execution after specification
- Claude Opus 4.5 Developer Verdict: Community Reaction - Why the latest Claude model makes both techniques more effective
