Software developers are familiar with design patterns. The Gang of Four cataloged reusable solutions to recurring problems in object-oriented design. You learn patterns like Strategy, Observer, and Factory not because they are theoretically interesting but because they solve problems you encounter repeatedly. Once you know the pattern, you recognize the problem and reach for a proven solution.
Prompt patterns serve the same purpose for AI collaboration. They are reusable approaches to recurring challenges in the Generate phase. Like design patterns, they have names, contexts where they apply, and structures you can follow. Like design patterns, they become more valuable as you internalize them and recognize when to apply them.
This post covers three foundational patterns. The next post will cover three more, plus guidance on building your team’s pattern library.
Why Patterns Matter
Without patterns, every generation is an improvisation. You face a task, think about how to prompt the AI, and construct an approach from scratch. Sometimes this works well. Sometimes it does not. The inconsistency means you cannot predict outcomes or improve systematically.
Patterns provide structure for this process. When you recognize that a task is too complex for single-shot generation, you reach for the Decomposition Pattern. When you need the AI to match your codebase’s style, you reach for the Exemplar Pattern. When the AI keeps introducing unwanted elements, you reach for the Constraint Pattern.
Over time, patterns become habitual. You stop thinking about how to structure your prompts and start thinking about which pattern fits the situation. This frees cognitive resources for the actual problem: what should this code do?
Patterns also create shared vocabulary within teams. When a colleague says “I used the Exemplar Pattern with our auth module,” everyone understands what that means. This shared vocabulary accelerates code review, knowledge sharing, and onboarding.
The Decomposition Pattern
When to use: The task is too complex or too large for reliable single-shot generation. You observe that the AI produces partial solutions, loses track of requirements, or generates inconsistent code when given the full specification at once.
Core idea: Break the complex task into smaller, self-contained units. Generate each unit separately with focused context. Integrate the units after each passes evaluation.
How it works:
Decomposition is not just splitting a task into arbitrary pieces. Effective decomposition follows the natural boundaries of the code: layers, concerns, or dependency order.
Decompose by layer. If you need a feature that spans data access, business logic, and API endpoints, generate each layer separately. Start with the data layer because other layers depend on it. Then generate business logic that uses the data layer. Then generate the API endpoint that uses the business logic. Each generation has focused context and a clear specification.
Decompose by concern. If you need a complex function that validates input, transforms data, and handles errors, consider generating each concern separately. Generate the validation logic with its own specification. Generate the transformation logic with its own specification. Then generate the orchestrating function that combines them.
Decompose by dependency order. Identify which pieces other pieces depend on. Generate the foundational pieces first, validate them, then provide them as context for dependent pieces. This ensures each generation builds on verified code rather than assumptions.
Example:
Suppose you need to build a user registration system that validates input, checks for duplicate accounts, creates the account, sends a verification email, and returns appropriate responses.
Instead of one massive specification, decompose:
First generation: Input validation function. Specification focuses on validation rules, error messages, and the ValidationResult type.
Second generation: Duplicate check function. Specification focuses on the database query, what constitutes a duplicate, and the response type. Provide the ValidationResult type from the first generation as context.
Third generation: Account creation function. Specification focuses on creating the database record, generating the verification token, and handling creation failures. Provide the validated input types and duplicate check interface as context.
Fourth generation: Email sending function. Specification focuses on composing and sending the verification email. Provide the account type and verification token type as context.
Fifth generation: Registration orchestrator. Specification focuses on calling each function in order, handling errors at each step, and returning the appropriate response. Provide the interfaces of all four functions as context.
Each generation is tractable. Each produces code you can evaluate independently. The final integration is straightforward because each component has a clear interface.
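To make the shape of the result concrete, here is a minimal sketch of how the five generations might compose. All names, signatures, and types here are hypothetical stand-ins (a `set` replaces the database, the email sender is stubbed); the point is that each piece has a narrow interface the orchestrator can wire together.

```python
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    valid: bool
    errors: list = field(default_factory=list)

def validate_input(email: str, password: str) -> ValidationResult:
    """First generation: input validation with explicit error messages."""
    errors = []
    if "@" not in email:
        errors.append("invalid email")
    if len(password) < 8:
        errors.append("password too short")
    return ValidationResult(valid=not errors, errors=errors)

def is_duplicate(email: str, existing: set) -> bool:
    """Second generation: duplicate check (a set stands in for the DB query)."""
    return email in existing

def create_account(email: str, existing: set) -> dict:
    """Third generation: account creation plus a verification token."""
    existing.add(email)
    return {"email": email, "token": f"verify-{email}"}

def send_verification_email(account: dict) -> None:
    """Fourth generation: email sending (stubbed for the sketch)."""

def register_user(email: str, password: str, existing: set) -> dict:
    """Fifth generation: orchestrator calling each piece in order."""
    result = validate_input(email, password)
    if not result.valid:
        return {"status": 400, "errors": result.errors}
    if is_duplicate(email, existing):
        return {"status": 409, "errors": ["account already exists"]}
    account = create_account(email, existing)
    send_verification_email(account)
    return {"status": 201, "account": account}
```

Notice that the orchestrator contains no validation, persistence, or email logic of its own; that is what makes the fifth generation tractable.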
When not to use: When the task is simple enough for reliable single-shot generation. Decomposition adds overhead. If the AI consistently produces good results for a task in a single generation, decomposition is unnecessary complexity.
The Exemplar Pattern
When to use: You need generated code to match specific patterns, conventions, or styles from your existing codebase. Verbal descriptions of your conventions are insufficient or impractical. The AI keeps producing code that is functionally correct but stylistically inconsistent with your system.
Core idea: Instead of describing what you want, show what you want. Include concrete examples from your codebase as part of the specification. The AI learns your patterns from the examples and replicates them in the generated code.
How it works:
Select one or two examples from your codebase that embody the patterns you want the generated code to follow. Include these examples in your specification with annotations explaining what aspects of the examples should be emulated.
The key is choosing the right examples. Good exemplars:
Are structurally similar to the target code. If you are generating an API endpoint, provide an example of an existing API endpoint. If you are generating a data transformation, provide an existing data transformation.
Are clean and representative. Choose examples that follow your conventions correctly, not examples that are historical exceptions or known technical debt. The AI will emulate what you show, including flaws.
Are annotated with what matters. Do not just paste code. Explain what aspects the AI should emulate. “Note the error handling pattern in lines 15 through 22” or “Follow the same response structure” or “Use the same logging approach.” Without annotation, the AI may emulate the wrong aspects.
Example:
Suppose you need a new API endpoint for updating user preferences. Your codebase has a consistent pattern for API endpoints.
Your specification includes the functional requirements and constraints, plus:
“Follow the patterns established in the existing endpoint below. Specifically, emulate:
- The input validation approach using our validate() decorator
- The error handling pattern with specific error types
- The response structure with data and metadata fields
- The logging pattern with structured context”
```python
@router.put("/users/{user_id}/profile")
@validate(UpdateProfileSchema)
async def update_profile(user_id: UUID, data: UpdateProfileRequest) -> APIResponse:
    logger.info("profile_update_started", user_id=str(user_id))
    try:
        user = await user_service.get_user(user_id)
        if not user:
            raise NotFoundError(f"User {user_id} not found")
        updated = await user_service.update_profile(user, data)
        logger.info("profile_update_completed", user_id=str(user_id))
        return APIResponse(
            data=UserProfileResponse.from_model(updated),
            metadata={"updated_at": updated.updated_at.isoformat()}
        )
    except NotFoundError:
        raise
    except ValidationError as e:
        raise BadRequestError(str(e))
    except Exception as e:
        logger.error("profile_update_failed", user_id=str(user_id), error=str(e))
        raise InternalError("Failed to update profile")
```
The AI now has a concrete model to follow. The generated endpoint for updating preferences will naturally adopt the same validation decorator, error handling pattern, response structure, and logging approach. This produces code that fits your system far better than verbal instructions like “use consistent error handling.”
When not to use: When you are deliberately establishing new patterns that differ from existing ones. When your existing examples are poor quality and should not be replicated. When the task is so different from anything in your codebase that examples would be misleading rather than helpful.
The Constraint Pattern
When to use: The AI keeps introducing elements you do not want. It reaches for external libraries when you need pure standard library code. It uses patterns that do not fit your architecture. It generates code that works but violates your team’s conventions or your system’s requirements.
Core idea: Define what the code must not do, not just what it should do. Explicit constraints prevent the AI from making unwanted choices that look reasonable in isolation but are wrong for your context.
How it works:
Add a constraints section to your specification that lists prohibited approaches, patterns, libraries, or behaviors. Be specific. Vague constraints like “keep it simple” are open to interpretation. Precise constraints like “do not use any external dependencies” leave no room for misinterpretation.
Categories of useful constraints:
Dependency constraints. “Use only the Python standard library. No third-party packages.” “Use our existing database utility module, do not import SQLAlchemy directly.” “Do not add new npm dependencies.”
Pattern constraints. “Do not use inheritance for this; use composition.” “Do not use global state.” “Do not use callback-based async; use async/await throughout.” “Do not use the singleton pattern.”
Performance constraints. “Do not load the entire dataset into memory; use streaming.” “Do not make sequential API calls; use concurrent requests.” “Do not use recursive approaches for inputs exceeding 1,000 elements.”
Security constraints. “Do not log any field from the UserCredentials object.” “Do not include secrets in error messages.” “Do not use string concatenation for SQL queries; use parameterized queries exclusively.”
Compatibility constraints. “Do not use features introduced after Python 3.9.” “Do not use ES modules; this project uses CommonJS.” “Do not use any API marked as deprecated in our codebase.”
Style constraints. “Do not use abbreviations in variable names.” “Do not use default exports.” “Do not use ternary expressions for complex conditions.”
Example:
Suppose you need a function to parse configuration files.
Your specification includes the functional requirements, plus:
“Constraints:
- Do not use any external libraries. Use only Python standard library modules (json, configparser, os, pathlib).
- Do not use eval() or exec() for any purpose.
- Do not read from environment variables directly; accept an optional env_overrides dict parameter instead.
- Do not raise generic Exception; use ConfigurationError for all error conditions.
- Do not use mutable default arguments.
- Do not cache results at module level; return fresh results each time.”
Without these constraints, the AI might reasonably reach for PyYAML or toml libraries, use eval for dynamic config processing, read environment variables directly, raise generic exceptions, or cache for performance. Each of these choices could be defensible in isolation but wrong for your specific context.
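For contrast, here is a sketch of what a parser satisfying those constraints might look like. This is not the AI's output from the post, just an illustration, assuming a JSON-only format and a hypothetical `load_config` name; note how each constraint shows up directly in the code.

```python
import json
from pathlib import Path

class ConfigurationError(Exception):
    """Single error type for all configuration failures, per the constraints."""

def load_config(path, env_overrides=None):
    """Parse a JSON config file under the stated constraints.

    Standard library only, no eval()/exec(), no direct environment reads
    (overrides arrive via env_overrides), no mutable default arguments,
    and no module-level caching: each call returns fresh results.
    """
    overrides = env_overrides or {}   # None default avoids a mutable default argument
    config_path = Path(path)
    if not config_path.is_file():
        raise ConfigurationError(f"config file not found: {path}")
    try:
        config = json.loads(config_path.read_text())
    except json.JSONDecodeError as e:
        raise ConfigurationError(f"invalid JSON in {path}: {e}")
    if not isinstance(config, dict):
        raise ConfigurationError("top-level config must be an object")
    config.update(overrides)          # caller-supplied overrides win
    return config
```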
Combining constraints with exemplars. The Constraint Pattern is often most powerful when combined with the Exemplar Pattern. Show an example of code that follows your conventions and explicitly state what must not change. “Follow the pattern in the example above. Do not modify the error handling approach or the logging structure.”
When not to use: When you have so many constraints that they effectively dictate the implementation. If your constraints specify every detail, you have written the code in constraint form and the AI is just translating. At that point, consider whether the AI is adding value or whether writing the code directly would be faster.
Combining Patterns
These patterns are not mutually exclusive. Complex tasks often benefit from multiple patterns applied together.
A common combination: use the Decomposition Pattern to break a complex task into units, then use the Exemplar Pattern on each unit to ensure consistency, and add the Constraint Pattern to prevent known problematic approaches.
Another combination: use the Constraint Pattern to define system-wide rules in your system prompt, then use the Exemplar Pattern in individual specifications. The constraints prevent global problems while the exemplars guide local implementation.
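Teams that combine patterns often end up assembling specifications from reusable parts rather than writing each prompt from scratch. A minimal sketch of that idea, with hypothetical helper and parameter names:

```python
def build_specification(requirements, constraints=None, exemplar=None, exemplar_notes=None):
    """Assemble a generation prompt from functional requirements, an optional
    constraints section (Constraint Pattern), and an optional annotated code
    example (Exemplar Pattern). The format is illustrative, not prescriptive."""
    sections = ["Requirements:\n" + requirements.strip()]
    if constraints:
        sections.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if exemplar:
        notes = f"\nEmulate: {exemplar_notes}" if exemplar_notes else ""
        sections.append(f"Follow the patterns in this example:{notes}\n{exemplar.strip()}")
    return "\n\n".join(sections)

spec = build_specification(
    "Add an endpoint for updating user preferences.",
    constraints=["Do not add new dependencies.", "Do not use global state."],
    exemplar="@router.put(...)\nasync def update_profile(...): ...",
    exemplar_notes="error handling and response structure",
)
```

Keeping constraints and exemplars as separate, named inputs also makes them easy to store in a shared library and reuse across specifications.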
The goal is not to use as many patterns as possible but to use the patterns that address the specific challenges of each generation task. With practice, pattern selection becomes intuitive. You recognize the type of challenge and reach for the appropriate pattern without deliberate analysis.
Building Pattern Awareness
Start by noticing when generation produces unwanted results. Each failure is an opportunity to identify which pattern would have prevented it.
If the AI produced code that was too complex or lost coherence, the Decomposition Pattern might have helped.
If the AI produced functionally correct code that did not match your conventions, the Exemplar Pattern might have helped.
If the AI made choices you did not want, the Constraint Pattern might have helped.
In the next post, I will cover three more patterns: Iterative Refinement, Verification, and Persona. I will also discuss how to build and maintain a team pattern library that evolves with your experience.
Let’s Continue the Conversation
Which of these patterns addresses your most common generation frustration?
Have you developed your own prompt patterns that reliably produce good results? What problems do they solve?
Share your patterns in the comments. The best pattern libraries are built from collective experience.