
Generate: The Art of Effective AI Collaboration

Posted on January 31, 2026 by ivan.turkovic

Generation is where the visible work happens. You provide input, and the AI produces code. This is the moment most developers think of when they imagine AI-assisted development. It is also where most of them start, jumping directly to generation without the specification work that should precede it.

In the ADD cycle, generation is the second phase, not the first. This ordering is deliberate. Generation without specification produces code built on hidden assumptions. Generation with specification produces code you can evaluate against explicit criteria.

But even with a good specification, generation requires its own disciplines. Context management, iteration strategy, and resistance to premature acceptance all determine whether generation produces useful output or creates more work than it saves.

Generation in Context

The Generate phase sits between Specify and Evaluate. This position shapes how generation should work.

From specification, generation receives clear requirements, constraints, and context. The specification defines what success looks like. It provides the criteria against which output will be judged. Without this input, generation is guessing. With it, generation has direction.

To evaluation, generation provides candidate code. This code is not finished until it passes evaluation. The discipline of the Generate phase is to produce output worth evaluating, not to produce final code. Treating generated code as a draft rather than a finished product changes how you interact with the AI and how you respond to its output.

This positioning means generation is not about getting the AI to produce perfect code on the first try. It is about producing code that, combined with human evaluation and refinement, efficiently reaches a correct solution. Sometimes the first generation is exactly right. Often it requires iteration. The measure of effective generation is total time to correct code, not impressiveness of initial output.

Context Management: What to Include

The AI generates code based on the context you provide. Better context produces better output. But context management is not simply about providing more information. It requires judgment about what helps and what creates noise.

Always include the specification. This seems obvious, but developers often paraphrase or summarize their specification rather than providing it directly. Give the AI the complete specification you wrote. The precision you invested in writing it should be preserved in providing it.

Include relevant code examples. If your specification references patterns from your codebase, include those patterns. If the new code must implement an interface, include that interface definition. If there are similar functions the AI should emulate, include them. Concrete examples communicate more than abstract descriptions.

Include interface definitions. When the generated code must interact with existing code, provide the signatures, types, and contracts it must respect. The AI cannot match interfaces it cannot see. This includes not just function signatures but data types, error types, and any protocols the code must follow.
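
For instance, if the AI must generate a storage adapter, the context might contain the protocol it has to satisfy. A minimal sketch, with hypothetical names (`User`, `UserRepository`) standing in for your own types:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class User:
    id: int
    email: str


class UserRepository(Protocol):
    """Contract that the generated storage code must satisfy."""

    def get_by_id(self, user_id: int) -> User | None:
        """Return the user with the given id, or None if absent."""
        ...
```

With this in context, the AI can match the exact signature and return type instead of inventing its own.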

Include error handling patterns. If your codebase has a consistent approach to error handling, show it. Include an example of how errors are created, propagated, and logged. Without this context, the AI will invent its own error handling, which may not match your conventions.
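
A short excerpt of your conventions communicates this faster than a description of them. The sketch below is illustrative; `AppError` and `ConfigError` are hypothetical stand-ins for your own error hierarchy:

```python
import logging

logger = logging.getLogger(__name__)


class AppError(Exception):
    """Base class for all application errors."""


class ConfigError(AppError):
    """Raised when configuration is missing or invalid."""


def load_timeout(raw: str) -> float:
    try:
        return float(raw)
    except ValueError as exc:
        # Convention: log at error level, then raise a typed error with context.
        logger.error("Invalid timeout value: %r", raw)
        raise ConfigError(f"timeout must be numeric, got {raw!r}") from exc
```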

Include relevant configuration. If the code depends on configuration values, environment variables, or feature flags, describe them. The AI needs to know what can be configured and what is fixed.
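
One compact way to communicate this is a settings object annotated with where each value comes from. A hypothetical sketch, assuming environment-variable configuration:

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    """Configuration the generated code may read; everything else is fixed."""

    api_base_url: str          # from API_BASE_URL, required
    request_timeout: float     # from REQUEST_TIMEOUT, defaults to 30 seconds
    enable_retries: bool       # feature flag ENABLE_RETRIES, defaults to off


def settings_from_env() -> Settings:
    return Settings(
        api_base_url=os.environ["API_BASE_URL"],
        request_timeout=float(os.environ.get("REQUEST_TIMEOUT", "30")),
        enable_retries=os.environ.get("ENABLE_RETRIES", "0") == "1",
    )
```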

Context Management: What to Exclude

More context is not always better. Excessive or irrelevant context creates noise that can degrade output quality and consume context window space that could be used for more relevant information.

Exclude irrelevant code. If you are generating a validation function, the AI does not need to see your entire data access layer. Include only code that the new code will interact with or should emulate. Resist the temptation to provide “everything just in case.”

Exclude excessive conversation history. In multi-turn conversations, earlier exchanges may no longer be relevant. If you have iterated through several approaches and settled on a direction, summarize that direction rather than including the full history of dead ends. The AI will give weight to information in its context, including information about abandoned approaches.

Exclude redundant information. If your specification already describes the error handling approach, you do not need to repeat it in additional context. Redundancy creates opportunities for inconsistency if the repetitions differ slightly.

Exclude speculative requirements. Include what the code must do now, not what it might need to do later. Future requirements create ambiguity about current scope. If extensibility is a requirement, state it as a constraint, but do not include detailed specifications for features you are not building yet.

The goal is a context that is complete without being cluttered. Every piece of context should earn its place by contributing to better generation.

The Role of System Prompts

System prompts establish baseline context that applies to all generations in a session or project. They are particularly valuable for context that would otherwise need to be repeated in every specification.

Effective system prompts include:

Technology stack and versions. “All code should target Python 3.11 and use type hints throughout.”

Coding conventions. “Follow PEP 8 for formatting. Use snake_case for functions and variables, PascalCase for classes.”

Architectural patterns. “This codebase uses a hexagonal architecture. Business logic should be independent of infrastructure concerns.”

Security requirements. “All user input must be validated. Never log sensitive data including passwords, tokens, or personal identifiers.”

Error handling philosophy. “Prefer explicit error types over generic exceptions. All functions that can fail should document their failure modes.”
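
Put together, the examples above might combine into something like this hypothetical prompt, shown as a Python constant so tooling can load it:

```python
SYSTEM_PROMPT = """\
You are generating code for our service.

- Target Python 3.11; use type hints throughout.
- Follow PEP 8. Use snake_case for functions and variables, PascalCase for classes.
- This codebase uses a hexagonal architecture; keep business logic independent
  of infrastructure concerns.
- Validate all user input. Never log passwords, tokens, or personal identifiers.
- Prefer explicit error types over generic exceptions; document failure modes.
"""
```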

System prompts should be stable and broadly applicable. Requirements specific to a particular task belong in the specification, not the system prompt. The system prompt establishes the environment; the specification defines the task.

Teams should maintain their system prompt as a shared artifact. When conventions change, the system prompt should be updated. When new team members join, the system prompt teaches them the baseline expectations.

Iterative Generation: Using Output to Refine

The first generation is often not the final answer. Effective use of AI involves iteration, but iteration should be purposeful rather than random.

Use generation failures to improve specification. When the AI produces unexpected output, ask why. Often the answer is that your specification was ambiguous in ways you did not recognize. The AI’s interpretation, even if wrong, reveals where your specification could be clearer. Update the specification and regenerate.

Distinguish specification gaps from generation errors. If the AI produced exactly what you asked for but not what you wanted, that is a specification problem. If the AI misinterpreted a clear specification, that is a generation problem. The response differs: specification problems require specification revision; generation problems may require prompt adjustment or regeneration.

Build incrementally for complex tasks. If a task is too complex for reliable single-shot generation, decompose it. Generate the core logic first, validate it, then generate the supporting code. Each generation builds on verified output rather than accumulating errors.

Know when to regenerate versus refine manually. Sometimes the generated code is close enough that manual editing is faster than another generation round. Sometimes the generated code is so far off that starting fresh is better. Develop judgment about which approach saves time. As a rough heuristic, if you would need to change more than 30% of the code, regeneration may be faster.
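
One way to calibrate that heuristic after the fact is to diff the generated code against your final version and measure how much changed. A rough sketch using Python's standard difflib (the 30% threshold is a rule of thumb, not a precise boundary):

```python
import difflib


def change_ratio(generated: str, final: str) -> float:
    """Fraction of the generated code that ended up changed (0.0 to 1.0)."""
    similarity = difflib.SequenceMatcher(None, generated, final).ratio()
    return 1.0 - similarity


# If you routinely see ratios above ~0.3, regenerating with a better
# specification would likely have been faster than editing by hand.
```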

Multi-Turn Versus Single-Shot Approaches

Generation can happen in a single exchange or across multiple turns of conversation. Each approach has strengths.

In single-shot generation, you provide the specification and receive complete output in one exchange. This works well when:

  • The task is well-defined and moderate in complexity
  • The specification is complete and unambiguous
  • You want to avoid context accumulation from conversation history

Multi-turn generation builds output through conversation. This works well when:

  • The task is exploratory and you are discovering requirements
  • You want to review intermediate output before proceeding
  • The full task exceeds what can be reliably generated at once

Multi-turn conversations accumulate context, which has both benefits and costs. The AI retains information from earlier in the conversation, which can help with consistency. But it also retains abandoned approaches and corrections, which can create confusion. For complex multi-turn generations, periodically summarize the current state and clear irrelevant history.

A hybrid approach often works well: use multi-turn conversation to explore and refine the specification, then use single-shot generation with the finalized specification to produce the actual code.
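
In code, the hybrid amounts to carrying only the finalized specification into a fresh request. A minimal sketch, assuming a chat-style API that accepts role/content messages; `complete` is a hypothetical stand-in for your client:

```python
def build_single_shot_messages(system_prompt: str, final_spec: str) -> list[dict]:
    """Start a fresh context: system prompt plus the finalized specification,
    deliberately leaving the exploratory multi-turn history behind."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": final_spec},
    ]


# messages = build_single_shot_messages(SYSTEM_PROMPT, spec)
# code = complete(messages)  # 'complete' is a placeholder for your API client
```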

Managing Context Window Limitations

Current AI models have context window limits. When your specification, context, and conversation history exceed these limits, information gets truncated. Managing this constraint is part of effective generation.

Prioritize recent and relevant information. If truncation occurs, it typically affects earlier content. Structure your prompts so that the most critical information appears in the specification and recent context, not in distant conversation history.
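
If you assemble the message list yourself, you can enforce this priority mechanically by dropping the oldest turns first. A rough sketch, using the common approximation of about four characters per token (use a real tokenizer for anything precise):

```python
def trim_to_budget(messages: list[dict], max_tokens: int) -> list[dict]:
    """Keep the system prompt and the newest turns within a rough token budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    budget = max_tokens - sum(len(m["content"]) // 4 for m in system)
    kept: list[dict] = []
    for message in reversed(rest):          # newest first
        cost = len(message["content"]) // 4
        if budget - cost < 0:
            break
        budget -= cost
        kept.append(message)

    return system + list(reversed(kept))    # restore chronological order
```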

Use summarization for long conversations. If a conversation has gone on long enough that relevant information might be truncated, pause and summarize. Create a new message that captures the current state: what has been decided, what the current task is, what constraints apply. This summary becomes the effective context even if earlier messages are truncated.

Break large tasks into smaller generations. If a task requires more context than fits in the window, decompose it. Generate components separately with focused context, then integrate them. This keeps each generation within context limits and makes each subtask more tractable.

Externalize stable context. System prompts, coding standards, and architectural documentation can be maintained as separate artifacts referenced by the specification rather than included in full. “Follow the error handling conventions documented in CONVENTIONS.md” is more compact than including the full conventions inline.

The Discipline of Not Accepting Plausible Output

In my post on the unstructured AI problem, I described the “looks reasonable” trap: AI-generated code appears confident and coherent, which makes it easy to accept without sufficient scrutiny. The Generate phase requires active resistance to this trap.

Plausible is not correct. Code that looks right and code that is right are different things. The AI produces output that matches patterns in its training data. Those patterns may not match your specific requirements, even if they seem reasonable in isolation.

Confidence is not accuracy. The AI presents all output with the same confident tone. It does not signal uncertainty, even when uncertain. Treat all generated code as requiring verification, regardless of how authoritative it sounds.

First impressions are unreliable. The initial read of generated code often focuses on whether it looks like code that would work. This is a low bar. The question is not whether the code looks plausible but whether it meets your specification.

The discipline is to hold generated code in a tentative state until evaluation confirms its quality. This means resisting the urge to immediately integrate code that seems fine. It means maintaining skepticism even when the AI produces something impressive. It means remembering that generation is the middle of the process, not the end.

When Generation Reveals Specification Gaps

Sometimes the most valuable output of generation is not the code but the realization that your specification was incomplete. This is not a failure of generation. It is generation working correctly by revealing what you did not know you did not know.

Signs that generation has revealed a specification gap:

  • The AI asked clarifying questions you had not considered
  • The generated code handles cases you did not specify
  • The generated approach differs from what you imagined, but you cannot point to a specification requirement it violates
  • You find yourself wanting to add constraints you did not originally include

When this happens, return to the Specify phase. Update your specification to address the discovered gap. Then regenerate with the improved specification. This loop between Specify and Generate is normal and valuable. It means the process is surfacing ambiguity at low cost.

Do not try to patch specification gaps through prompt iteration alone. If the specification is genuinely incomplete, fix the specification. Prompting around a specification gap creates code that addresses the gap in ad-hoc ways, making evaluation harder and creating inconsistency if similar gaps arise in future tasks.

From Generation to Evaluation

Generation produces candidate code. That code enters the Evaluate phase, where human judgment determines whether it meets the specification and fits the system.

The transition from Generate to Evaluate should be crisp. Do not blur the boundary by informally evaluating during generation and accepting code that “seems good enough.” Complete the generation with code that addresses the full specification, then evaluate systematically.

In the next two posts, I will explore prompt patterns that make generation more effective. These patterns are reusable approaches to common generation challenges: decomposing complex tasks, teaching by example, constraining output, and more. The patterns build on the foundation of good specification and disciplined generation covered here.

For now, practice the disciplines of context management and iteration. Notice when context helps and when it hurts. Notice when iteration is productive and when it is spinning. Notice when you are tempted to accept plausible output before evaluation confirms its quality.

The Generate phase is where AI capability becomes visible. Making that capability reliable is the work of ADD.


Let’s Continue the Conversation

What context management strategies have you found effective? What signals tell you that context is helping versus creating noise?

How do you decide when to iterate versus when to accept generated output?

Share your approaches in the comments. Effective generation is a skill that develops through practice and shared learning.
