In the previous post, I introduced three foundational prompt patterns: Decomposition for breaking complex tasks into manageable units, Exemplar for teaching by example, and Constraint for defining boundaries. These patterns address the most common generation challenges.
This post completes the catalog with three more patterns, then addresses the practical question of building and maintaining a pattern library for your team.
The Iterative Refinement Pattern
When to use: The task is complex enough that first-attempt output is unlikely to be complete or correct, but the general direction is likely to be useful. You want to build toward a solution through structured conversation rather than hoping for perfection in one shot.
Core idea: Treat the first generation as a draft. Use structured follow-up prompts to refine specific aspects while preserving what works. Each iteration addresses a defined concern rather than asking the AI to vaguely “improve” the output.
How it works:
Iterative Refinement is not the same as the ad-hoc iteration that characterizes unstructured AI usage. In unstructured usage, you see something wrong, mention it, and hope the AI fixes it. In the Iterative Refinement Pattern, you plan the iteration sequence before you start.
The structure follows a predictable sequence.
First generation: Core logic. Provide the full specification and ask for the core implementation. Do not expect perfection. Expect a working foundation.
Second generation: Error handling. Review the core logic. If the foundation is sound, ask the AI to add comprehensive error handling. Provide specific error scenarios and expected responses. “Now add error handling for the following scenarios: [list]. Use our ErrorResult type as shown in the specification.”
Third generation: Edge cases. With core logic and error handling in place, address edge cases. “The current implementation does not handle [specific edge case]. Add handling for: [list with expected behavior for each].”
Fourth generation: Optimization. If performance matters, address it after correctness is established. “The current implementation loads all records into memory. Refactor to use streaming with a batch size of 100, preserving the existing interface and error handling.”
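To make that optimization step concrete, here is a minimal sketch of the kind of batching helper such a refactor might introduce. The function name `batched` and the generic record type are illustrative assumptions, not part of the specification above.

```python
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")

def batched(records: Iterable[T], batch_size: int = 100) -> Iterator[List[T]]:
    """Yield records in fixed-size batches instead of loading them all into memory."""
    batch: List[T] = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # emit any remaining partial batch
        yield batch
```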
Each iteration specifies what to change and what to preserve. This prevents the common problem where fixing one thing breaks another. “Preserve the existing validation logic and error handling. Only modify the data retrieval approach.”
Key discipline: specify what to preserve. Without explicit preservation instructions, the AI may rewrite working code alongside the changes you requested. Always state what should not change alongside what should.
Example:
First prompt: “Generate the OrderProcessor class per the specification. Focus on the core processing pipeline: validate order, check inventory, calculate pricing, and create the order record.”
Review output. Core pipeline is correct. Error handling is minimal.
Second prompt: “The core pipeline looks correct. Now add error handling without modifying the existing pipeline logic. Handle these specific scenarios:
- InsufficientInventoryError when requested quantity exceeds available
- PricingCalculationError when product pricing data is missing
- OrderCreationError when database insertion fails
Each error should include the order_id and a descriptive message. Log errors using our structured logging pattern shown in the specification.”
Review output. Error handling is correct. Edge case with zero-quantity line items is not handled.
Third prompt: “Error handling looks good. Preserve all existing logic. Add handling for these edge cases:
- Order with zero line items: raise EmptyOrderError before processing
- Line item with zero quantity: skip the line item and log a warning
- Duplicate product IDs in line items: merge quantities before processing”
Each iteration is focused, specific, and builds on verified work.
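To make the end state tangible, here is a rough sketch of what the edge-case handling from the third iteration might look like in code. The class, attribute, and exception names follow the hypothetical OrderProcessor example above; they are assumptions, not a prescribed implementation.

```python
import logging
from collections import defaultdict

logger = logging.getLogger(__name__)

class EmptyOrderError(Exception):
    """Raised when an order arrives with no line items."""

class OrderProcessor:
    def _normalize_line_items(self, order):
        # Edge case: an order with zero line items fails fast, before processing.
        if not order.line_items:
            raise EmptyOrderError(f"Order {order.order_id} has no line items")

        merged = defaultdict(int)
        for item in order.line_items:
            # Edge case: zero-quantity line items are skipped with a warning.
            if item.quantity == 0:
                logger.warning(
                    "Skipping zero-quantity line item",
                    extra={"order_id": order.order_id, "product_id": item.product_id},
                )
                continue
            # Edge case: duplicate product IDs are merged before processing.
            merged[item.product_id] += item.quantity
        return dict(merged)
```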
When not to use: When the task is simple enough for single-shot generation. When the first output is fundamentally wrong (not just incomplete). If the foundation is flawed, iterating on it compounds the problem. Regenerate from an improved specification instead.
The Verification Pattern
When to use: You want the AI to review its own output before you invest time in detailed evaluation. You want to catch surface-level issues like missing error handling, unused imports, or inconsistencies with the specification.
Core idea: After generation, ask the AI to check its output against your specification. Provide the specification again and ask specific verification questions.
How it works:
After the AI generates code, prompt it to verify specific aspects:
“Review the code you generated against the specification. Specifically check:
- Does the function handle all error cases listed in the specification?
- Are all input validation rules implemented?
- Does the response format match the specified schema?
- Are there any unused imports or variables?”
The AI reviews its own output and often catches obvious issues: a missing error case, an incorrect return type, a validation rule that was specified but not implemented.
Important limitations:
The Verification Pattern has real value, but it also has real limitations that you must understand to use it effectively.
The AI has the same blind spots when verifying as when generating. If the AI’s training data leads it toward a particular approach, verification using the same model will often confirm that approach. The same biases that produce errors can also prevent the model from detecting them.
Verification is better for surface-level issues than deep logic errors. The AI can reliably check whether all specified error cases are handled. It is less reliable at identifying subtle logic errors, race conditions, or security vulnerabilities. These require human evaluation.
Verification can produce false confidence. When the AI reports “all checks pass,” you might relax your own evaluation. This is dangerous. Treat AI verification as a preliminary filter, not a substitute for human evaluation. It catches some issues before you look, saving you time, but it does not eliminate the need for you to look.
Do not ask “is this code correct?” This question is too vague and invites a confident “yes.” Ask specific questions that have concrete answers. “Does the function return a Decimal type as specified?” is verifiable. “Is this code correct?” is not.
Example:
After generating a data validation function:
“Before I review the code, please verify the following against the original specification:
- List each validation rule from the specification and confirm whether it is implemented in the code.
- For each error condition in the specification, show me the line that handles it.
- Check whether any specified edge case (empty string, null input, string exceeding 255 characters) is not handled.
- Verify that error codes match the specification table exactly.”
The AI responds with a point-by-point review. It might discover that the 255-character limit check was implemented as 256, or that one error code does not match the specification. These are exactly the kinds of issues that are easy to miss in manual review but straightforward for the AI to catch.
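As a concrete illustration of the kind of defect this surfaces, consider a length check against a 255-character limit. The function name and error codes below are hypothetical, not from the specification in the example:

```python
MAX_NAME_LENGTH = 255  # the limit stated in the specification

def validate_name(value: str) -> None:
    # The original generation compared against 256, silently accepting a
    # 256-character name; the verification prompt surfaced the off-by-one.
    if not value:
        raise ValueError("EMPTY_NAME: name must not be empty")
    if len(value) > MAX_NAME_LENGTH:
        raise ValueError("NAME_TOO_LONG: name exceeds 255 characters")
```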
You still need to evaluate the code yourself. But the AI has already caught the easy issues, and your evaluation can focus on deeper concerns.
When not to use: When you need assurance about complex logic, security, or performance. These require human judgment. Also avoid relying on verification when the specification itself might be incomplete, since the AI will verify against what you specified, not against what you actually need.
The Persona Pattern
When to use: The generation task benefits from a specific perspective or expertise. You want the AI to prioritize certain concerns that a particular type of expert would prioritize.
Core idea: Establish an expertise context that shapes how the AI approaches the task. “You are a senior security engineer” focuses attention on security considerations. “You are a performance engineer” focuses attention on efficiency and scalability.
How it works:
Add a persona instruction to your prompt that establishes the perspective you want the AI to adopt:
“Approach this as a senior security engineer would. Prioritize input validation, output encoding, and defense in depth. Flag any patterns that could lead to injection, exposure of sensitive data, or privilege escalation.”
Or:
“Approach this as a database performance specialist. Prioritize query efficiency, appropriate indexing, and minimizing round trips. Flag any patterns that could cause N+1 queries or full table scans.”
When Persona helps:
Personas genuinely change output when they activate domain-specific knowledge the AI has but would not otherwise prioritize. Security-focused personas tend to produce more defensive code. Performance-focused personas tend to produce more efficient code. Accessibility-focused personas tend to catch ARIA issues that generic generation misses.
The effect is real but modest. Think of it as adjusting the AI’s attention, making it more likely to consider certain factors rather than fundamentally changing its capabilities.
When Persona is theater:
Personas do not give the AI capabilities it does not have. “You are the world’s best quantum computing expert” does not make the AI better at quantum computing. It just makes it more confident about quantum computing topics, which can actually be worse because increased confidence without increased capability produces more authoritative-sounding errors.
Personas also do not overcome fundamental model limitations. If the AI cannot reliably handle a type of task, adding a persona will not fix this. It may make the AI try harder, but trying harder with the same underlying capabilities produces marginal improvement at best.
A useful heuristic: if the persona activates knowledge the AI already has, it helps. If the persona claims expertise the AI does not have, it is theater.
Example:
Suppose you need to generate a user authentication endpoint.
Without persona: “Generate an authentication endpoint per the specification.” The AI produces a functional endpoint. It may or may not include rate limiting, brute-force protection, or timing-safe comparison.
With persona: “Approach this as a senior security engineer reviewing authentication code for a financial services company. Generate an authentication endpoint per the specification. In addition to the functional requirements, ensure the implementation addresses: brute-force protection, timing-safe password comparison, secure session handling, and appropriate audit logging.”
The persona focuses the AI’s attention on security concerns that the specification might not have enumerated in detail. The output is more likely to include security best practices that a security-focused developer would consider standard.
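For instance, the timing-safe comparison mentioned above might show up in the output as something like this minimal sketch (the function and parameter names are assumptions):

```python
import hmac

def password_hashes_match(stored_hash: bytes, candidate_hash: bytes) -> bool:
    # hmac.compare_digest runs in time independent of where the inputs differ,
    # so response latency does not leak how much of the hash matched.
    return hmac.compare_digest(stored_hash, candidate_hash)
```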
Notice that the persona works here because the AI has substantial training data about authentication security. The persona activates relevant knowledge rather than claiming knowledge that does not exist.
When not to use: When the task does not benefit from a particular perspective. When you are tempted to use a persona as a substitute for a clear specification. “You are an expert at understanding what I want” is not a useful persona. Clarity should come from the specification, not from the persona.
Building Your Team’s Pattern Library
Six patterns are a starting point. Your team will develop additional patterns through practice. The challenge is capturing and sharing what works.
Document patterns when they produce consistently good results. If a particular approach to generation works reliably across multiple tasks, it is a candidate for your pattern library. Document it with a name, a description of when to use it, how to structure it, and an example.
Use a consistent format. Each pattern entry should include: the pattern name, when to use it (the problem it solves), how to structure it (the approach), a concrete example, and when not to use it (limitations).
Share patterns through code review. When reviewing AI-generated code, discuss which patterns were used and how well they worked. This normalizes pattern usage and spreads knowledge about what is effective.
Include negative examples. When a pattern fails, document that too. “We tried using the Persona Pattern for database optimization, but the AI produced recommendations that were not compatible with our PostgreSQL version.” Negative examples prevent teammates from repeating failed approaches.
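If your team keeps the library in a structured form, an entry might look something like the sketch below. The field names are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class PatternEntry:
    """One entry in a team prompt-pattern library (illustrative only)."""
    name: str              # e.g. "Iterative Refinement"
    when_to_use: str       # the problem the pattern solves
    how_to_structure: str  # the prompting approach, step by step
    example: str           # a concrete prompt sequence that worked
    when_not_to_use: str   # known limitations
    negative_examples: list[str] = field(default_factory=list)  # documented failures
```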
Retiring Patterns That Stop Working
AI models change. Patterns that worked with one model version may not work with another. The effectiveness of specific prompting approaches can shift as models are updated.
Watch for patterns that used to produce good results but have degraded. If a pattern starts producing inconsistent output, test whether it is the pattern or the task. If the pattern works for other tasks but not for new ones, the new tasks might need different patterns. If the pattern fails broadly, it may be time to retire or revise it.
Do not hold onto patterns out of habit. The goal is effectiveness, not tradition. Your pattern library should be a living collection that adapts to the tools you use.
The Generate Phase Is Complete
With the specification templates from Post 6, the generation disciplines from Post 7, and the prompt patterns from Posts 8 and 9, you have a comprehensive toolkit for the Generate phase of ADD.
But generation is only the middle of the cycle. The code it produces is a candidate, not a conclusion. In the next post, I will begin covering the Evaluate phase: why human judgment is non-negotiable, what dimensions of evaluation matter, and how to build the skills and habits that catch what AI confidence obscures.
Let’s Continue the Conversation
Which of the six patterns across these two posts do you use most frequently? Have you found any that do not work as expected?
What patterns has your team developed that are not covered here?
Share your experience in the comments. The best pattern libraries come from collective practice, and your discoveries can help others generate more effectively.