ADD in Context: Greenfield, Legacy, Refactoring, and Testing

Posted on February 16, 2026 by ivan.turkovic

The ADD cycle is consistent across contexts: Specify, Generate, Evaluate, Integrate. But how you apply each phase changes depending on what you are building and where you are building it. Greenfield projects offer freedom that legacy codebases do not. Refactoring has constraints that new feature development lacks. Test generation inverts the typical flow.

This post covers how ADD adapts to different development contexts. The cycle remains the same, but the emphasis shifts.

ADD for Greenfield Projects

Greenfield projects offer maximum flexibility. There is no existing code to constrain your choices, no legacy patterns to follow, no technical debt to navigate. This freedom is valuable, but it comes with responsibility.

Establish patterns early. The first components you build become exemplars for everything that follows. If you generate a sloppy API endpoint early, that sloppiness propagates when you use it as an exemplar for subsequent endpoints. Invest extra time in the Specify and Evaluate phases for your first few generations. Get the patterns right, and the AI will replicate them. Get them wrong, and you will replicate the problems.
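
To make that concrete, here is the kind of deliberately small, consistent endpoint you might keep as your first exemplar. This is only a sketch: FastAPI, the in-memory store, and the names are illustrative assumptions, not part of the methodology.

```python
# A minimal sketch of an early exemplar endpoint. FastAPI and the in-memory
# store are illustrative choices; the point is the consistent shape later
# generations will copy.
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Hypothetical in-memory store standing in for a real repository.
USERS = {1: {"id": 1, "email": "ada@example.com"}}


@app.get("/users/{user_id}")
def get_user(user_id: int) -> dict:
    """Return a single user or a 404 with a structured error body."""
    user = USERS.get(user_id)
    if user is None:
        # Consistent error shape: every exemplar endpoint follows this pattern.
        raise HTTPException(status_code=404, detail=f"user {user_id} not found")
    return user
```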

Build your context library as you go. In greenfield development, you do not have existing code to provide context. But after your first few ADD cycles, you do. Extract the patterns that work. Document the conventions you establish. Build the context library that will guide future generations. The earlier you start, the more consistent your codebase becomes.

Specify architectural decisions explicitly. The AI does not know your architectural intentions unless you state them. If you want clean architecture with clear layer separation, specify that. If you want a particular error handling approach, specify that. If you want specific naming conventions, specify those. In legacy codebases, the existing code implies these decisions. In greenfield, you must be explicit.

Resist the temptation to move fast and skip evaluation. Greenfield projects often have aggressive timelines. The temptation is to generate rapidly and defer careful evaluation. This is a mistake. Technical debt accumulates faster in greenfield projects because there are no existing patterns to guide the AI toward consistency. Thorough evaluation early prevents larger problems later.

Plan for iteration. Your first specifications will be imperfect. Your early code will need revision as you learn more about the domain and the system. Build with the expectation that you will revisit and refine. ADD cycles are not one-time events; they are iterative. The code from cycle one informs the specifications for cycle two.

Define boundaries early. In greenfield projects, module boundaries, API contracts, and data schemas are all undetermined. Define these boundaries in your specifications before generating implementations. Clear boundaries make each ADD cycle more focused and reduce the coupling between generated components. Changing boundaries later is expensive; getting them right early pays dividends throughout the project.
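
For example, a boundary can be pinned down as a typed contract before any implementation exists. A minimal sketch, with hypothetical Order and OrderRepository names:

```python
# A sketch of defining a boundary before implementation. The names (Order,
# OrderRepository) are hypothetical; the point is that the contract exists
# before any ADD cycle generates an implementation behind it.
from dataclasses import dataclass
from typing import Optional, Protocol


@dataclass(frozen=True)
class Order:
    order_id: str
    customer_id: str
    total_cents: int  # integer cents avoid float rounding issues with money


class OrderRepository(Protocol):
    """Boundary between business logic and storage; implementations can vary."""

    def get(self, order_id: str) -> Optional[Order]: ...
    def save(self, order: Order) -> None: ...
```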

ADD for Legacy Codebases

Legacy codebases present the opposite challenge. You have context, perhaps too much of it. Existing patterns may be inconsistent, outdated, or undocumented. The code that exists constrains the code you can add.

Context is everything. Your specifications must account for how the existing system works, not just what you want the new code to do. Include existing interfaces, data structures, error handling patterns, and naming conventions in your specifications. The more context you provide, the better the AI can generate code that fits.

Lean heavily on existing code as exemplars. The Exemplar Pattern is particularly valuable in legacy contexts. Instead of describing the patterns you want, show them. Find the best existing examples of similar functionality and include them in your specification. The AI learns from concrete code, not abstract descriptions of conventions.
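
Concretely, the exemplar is real code lifted from the system, not a description of it. Something like the following (hypothetical names) would be pasted into the specification so the AI copies the logger, response shape, and naming conventions:

```python
# Sketch of the Exemplar Pattern: an existing handler (hypothetical names)
# included verbatim in the specification so the AI reproduces its conventions
# -- the module logger, the error dict shape, the snake_case keys.
import logging

logger = logging.getLogger(__name__)


def handle_get_invoice(request: dict) -> dict:
    """Existing handler used as an exemplar for new handlers."""
    invoice_id = request.get("invoice_id")
    if invoice_id is None:
        logger.warning("missing invoice_id in request")
        return {
            "status": "error",
            "error_code": "missing_field",
            "field": "invoice_id",
        }
    # ... fetch and return the invoice in the established response shape
    return {"status": "ok", "invoice_id": invoice_id}
```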

Evaluate fitness rigorously. In legacy codebases, the Fitness dimension of evaluation becomes paramount. Code that is correct in isolation may be wrong in context. Does the generated code follow the same patterns as surrounding code? Does it use the same error handling approach? Does it interact correctly with existing components? Fitness evaluation requires deep knowledge of the existing system.

Accept imperfection. Legacy codebases often have inconsistent patterns. You may need to generate code that follows a pattern you would not choose for new development, simply because that is what the existing system uses. Consistency with the existing system often matters more than ideal patterns. Document this as intentional technical debt when it occurs.

Specify constraints from existing patterns. Use the Constraint Pattern to prevent the AI from introducing approaches that conflict with the existing system. “Do not use async/await; this module uses callbacks throughout.” “Do not introduce new exception types; use the existing error handling framework.” These constraints keep generated code compatible with legacy patterns.
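
For instance, if the legacy module is callback-based, the generated code should be too. A sketch of what constraint-compliant output looks like, with hypothetical names:

```python
# Sketch of honoring a constraint taken from the existing system: this module
# uses callbacks throughout, so the generated function does too, rather than
# the async/await style the AI might otherwise default to. Names are hypothetical.
from typing import Callable, Optional


def fetch_profile(user_id: int,
                  on_done: Callable[[Optional[dict], Optional[Exception]], None]) -> None:
    """Callback-style API matching the legacy module's convention."""
    try:
        profile = {"id": user_id, "name": "placeholder"}  # stand-in for real I/O
        on_done(profile, None)
    except Exception as exc:  # errors are reported through the callback, not raised
        on_done(None, exc)


# Usage mirrors the rest of the legacy module:
fetch_profile(42, lambda profile, err: print(err or profile))
```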

Plan for incremental improvement. You rarely rewrite legacy systems all at once. ADD in legacy contexts often means small improvements over time. Each cycle makes the system slightly better: clearer code, better tests, improved documentation. The cumulative effect of many small improvements can be substantial.

Map the existing system first. Before applying ADD to a legacy codebase, invest time understanding the existing system. What patterns does it use? What are its conventions? Where are the inconsistencies? This mapping exercise produces the context you need for effective specifications and the knowledge you need for accurate evaluation.

ADD for Bug Fixes

Bug fixes are a specific context where ADD applies differently. The specification comes not from new requirements but from the gap between expected and actual behavior.

Reproduction is specification. The bug report describes what goes wrong. Your specification should include the reproduction steps, the expected behavior, and the actual behavior. The more precisely you can describe the conditions under which the bug occurs, the better the AI can generate a fix.

Include the failing test. If you have a test that demonstrates the bug, include it in your specification. “This test should pass but currently fails. Generate a fix that makes the test pass without breaking other tests.” The test serves as an executable specification that the AI can target.
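
A failing test like the following (hypothetical pagination example) makes the specification executable: it fails against the current code and must pass after the fix.

```python
# Sketch of a failing test included in a bug-fix specification. The paginate
# function and its off-by-one behavior are hypothetical; the test states the
# expected behavior the fix must satisfy without breaking other tests.
def paginate(items: list, page: int, per_page: int) -> list:
    # Buggy implementation under repair: it drops the final partial page.
    start = page * per_page
    return items[start:start + per_page] if start + per_page <= len(items) else []


def test_last_partial_page_is_returned():
    items = list(range(10))
    # Expected: page 3 with page size 3 contains the leftover item [9].
    # This test currently fails; the generated fix must make it pass.
    assert paginate(items, 3, 3) == [9]
```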

Evaluate for regression. When fixing bugs, the primary evaluation concern is regression. Does the fix break anything else? Does it handle the specific case while preserving correct behavior for other cases? Run the full test suite, not just the test for the bug being fixed.

Trace the root cause. The AI might generate a fix that addresses symptoms rather than the root cause. Evaluate whether the fix addresses the underlying problem or just patches over it. A symptom fix may resolve the immediate issue but leave the system vulnerable to similar bugs.

Document the fix context. Bug fixes benefit from documentation: what was the bug, why did it occur, how does the fix address it? This context helps future developers understand the code and prevents reintroduction of the same bug.

ADD for Refactoring

Refactoring changes code structure without changing behavior. This constraint fundamentally shapes how ADD applies.

Behavior preservation is the primary constraint. Your specification must be explicit: the refactored code must behave identically to the original. Include the original code, describe the structural changes you want, and emphasize that behavior must not change. “Refactor this function to use early returns instead of nested conditionals. The function must produce identical outputs for all inputs.”
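
A small example of the kind of transformation this describes, with hypothetical discount rules; the assertions at the end spot-check that both versions agree:

```python
# Sketch of a behavior-preserving refactor: nested conditionals rewritten as
# early returns. The discount rules are hypothetical; both versions must
# return identical values for every input.
def discount_original(is_member: bool, total: float) -> float:
    if is_member:
        if total >= 100:
            return total * 0.10
        else:
            return total * 0.05
    else:
        if total >= 100:
            return total * 0.02
        else:
            return 0.0


def discount_refactored(is_member: bool, total: float) -> float:
    if not is_member:
        return total * 0.02 if total >= 100 else 0.0
    if total >= 100:
        return total * 0.10
    return total * 0.05


# Quick behavioral check across representative inputs.
for member in (True, False):
    for amount in (0.0, 50.0, 100.0, 250.0):
        assert discount_original(member, amount) == discount_refactored(member, amount)
```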

Tests are both specification and safety net. Existing tests define the behavior that must be preserved. If the tests pass before refactoring and pass after, behavior is preserved. If you lack tests, consider generating them first (see next section) before refactoring. Refactoring without tests is risky, whether the refactoring is human or AI-driven.

Specify the structural goals clearly. Why are you refactoring? To improve readability? To enable future extension? To reduce duplication? To improve performance? The AI needs to understand the goal to make appropriate structural choices. “Refactor to separate data access from business logic” is clearer than “refactor to improve the code.”

Evaluate both behavior and structure. Evaluation has two dimensions: does the behavior remain identical, and does the structure achieve the refactoring goal? Code that preserves behavior but does not improve structure is not a successful refactoring. Code that improves structure but changes behavior is a bug.

Proceed incrementally. Large refactorings are risky. Break them into smaller steps, each with its own ADD cycle. Refactor one function, verify behavior, integrate. Then refactor the next. The incremental approach limits the blast radius if something goes wrong and makes evaluation more tractable.

ADD for Test Generation

Test generation inverts the typical ADD flow. Instead of specifying behavior and generating implementation, you specify implementation and generate tests.

Code under test is the specification. When generating tests, the existing code defines what behavior you are testing. Include the code in your specification. Describe what the code is supposed to do. Identify the critical behaviors that tests should verify.

Specify coverage goals. What should the tests cover? Happy paths, edge cases, error conditions, boundary values? Be explicit about coverage expectations. “Generate tests that cover all branches” or “Generate tests for the error handling paths” gives the AI clear direction.

Specify testing patterns. What testing framework should be used? What assertion style? What test organization? Include examples of existing tests as exemplars. The generated tests should fit with your existing test suite.

Evaluate test quality, not just test existence. Generated tests can be superficial: they exist, they pass, but they do not verify meaningful behavior. Evaluate whether tests actually catch the bugs they should catch. Consider mutation testing: if you introduce bugs into the code, do the tests fail? Tests that pass regardless of bugs are not valuable tests.
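
The principle behind mutation testing can be shown without any tooling: inject a deliberate bug and check whether the tests notice. A toy sketch:

```python
# Sketch of the mutation-testing idea without tooling: introduce a deliberate
# bug (a "mutant") and check that the test suite notices. Real projects would
# use a mutation-testing tool; this only illustrates the principle.
def clamp(value: int, low: int, high: int) -> int:
    return max(low, min(value, high))


def clamp_mutant(value: int, low: int, high: int) -> int:
    return max(low, min(value, high + 1))  # injected off-by-one bug


def run_tests(fn) -> bool:
    """Return True if all assertions pass for the given implementation."""
    try:
        assert fn(5, 0, 10) == 5
        assert fn(-3, 0, 10) == 0
        assert fn(11, 0, 10) == 10  # this case should catch the mutant
        return True
    except AssertionError:
        return False


assert run_tests(clamp) is True          # the real code passes
assert run_tests(clamp_mutant) is False  # a valuable test suite kills the mutant
```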

Watch for tautological tests. A common AI failure mode in test generation is creating tests that verify the code does what the code does, rather than what the code should do. “Assert that add(2, 3) returns 5” is a meaningful test. “Assert that process(x) returns process(x)” is tautological. Evaluate whether tests encode expected behavior or merely mirror implementation.
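
In code, the difference looks like this:

```python
# The first test encodes expected behavior independently of the implementation;
# the second only mirrors whatever the code happens to do and would pass even
# if the logic were wrong.
def add(a: int, b: int) -> int:
    return a + b


def test_add_meaningful():
    assert add(2, 3) == 5  # expectation stated as a concrete value


def test_add_tautological():
    assert add(2, 3) == add(2, 3)  # always true, verifies nothing
```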

Specify negative test cases. AI tends toward positive test cases: inputs that work, paths that succeed. Explicitly specify negative test cases in your specification: invalid inputs, error conditions, boundary violations. “Generate tests that verify the function rejects invalid email formats” is more likely to produce valuable negative tests than a generic “generate tests” request.
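
A sketch of what such an explicitly requested negative test might look like, using pytest and a hypothetical validate_email function:

```python
# Sketch of explicitly requested negative tests. validate_email is a
# hypothetical function under test; the rejected inputs come from the
# specification, not from the implementation.
import re

import pytest

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


def validate_email(address: str) -> bool:
    return bool(EMAIL_RE.match(address))


@pytest.mark.parametrize("bad_address", [
    "", "no-at-sign", "two@@example.com", "name@", "@example.com", "name@host",
])
def test_rejects_invalid_email_formats(bad_address):
    assert validate_email(bad_address) is False
```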

Generate tests before refactoring. If you plan to refactor code that lacks tests, generate tests first. The tests capture current behavior. Then refactor. If tests pass after refactoring, behavior is preserved. This test-first-refactor-second approach reduces refactoring risk.

ADD for Documentation

Documentation is another context where ADD inverts: code is the input, human-readable text is the output.

Code is the specification. When generating documentation, the code defines what you are documenting. Include the relevant code in your specification. Describe what kind of documentation you need: API reference, tutorial, architectural overview, inline comments.
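
For example, a docstring generated for the function below is only useful if every claim in it matches the code; note the explicit caution about what the code does not do. The retry helper itself is hypothetical.

```python
# Sketch of documentation generated from code: the function is the input, the
# docstring is the output, and every claim in the docstring must be checked
# against what the code actually does.
import time


def fetch_with_retry(fetch, attempts: int = 3, delay_seconds: float = 0.5):
    """Call `fetch()` up to `attempts` times (assumed >= 1), sleeping
    `delay_seconds` between tries.

    Returns the first successful result. If every attempt raises, the last
    exception is re-raised. Note that the delay is fixed, not exponential;
    documentation claiming "exponential backoff" here would be a hallucination.
    """
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except Exception as exc:
            last_error = exc
            if attempt < attempts:
                time.sleep(delay_seconds)
    raise last_error
```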

Specify audience and purpose. Who will read this documentation? New developers onboarding to the project? API consumers integrating with your service? Operations teams deploying and monitoring? The audience shapes the content, level of detail, and assumed knowledge.

Evaluate accuracy above all. AI-generated documentation can be fluent, well-structured, and completely wrong. Evaluation must verify that the documentation accurately describes what the code does. This requires reading both the documentation and the code and confirming they match. Do not trust that the AI understood the code correctly.

Watch for confident hallucination. AI models can generate documentation that describes features the code does not have or behaviors that do not occur. This is particularly common when the code is complex or when the AI fills in gaps based on what similar code might do. Verify every claim in the documentation against the actual implementation.

Update documentation atomically with code. When you generate new code, generate or update documentation in the same cycle. Documentation that lags behind code becomes misleading. Integrate documentation updates with code integration.

ADD for Code Review

ADD can also inform how you review code, whether human-written or AI-generated.

Apply evaluation dimensions consistently. The five evaluation dimensions from Post 10 (correctness, fitness, security, performance, maintainability) apply to all code. Use them as a framework for reviewing human-written code, not just AI-generated code. Consistency in review standards improves overall code quality.

Use AI to assist review. You can use ADD cycles to help review complex code. Specify the code under review and ask the AI to identify potential issues, explain complex logic, or suggest improvements. The AI’s output is not authoritative, but it can surface concerns you might miss.

Review AI-generated code with extra attention to AI failure modes. When reviewing code you know was AI-generated, pay particular attention to the common AI failure modes covered in Post 10: plausible but incorrect logic, training data bias, hidden assumptions, confident incorrectness, copy-paste drift. These are more likely in AI-generated code than human-written code.

Share review findings to improve the cycle. When code review catches issues, feed that learning back into the ADD cycle. If review consistently catches certain types of problems, those problems should be addressed earlier through better specifications, more explicit constraints, or updated evaluation checklists.

Consider AI assistance for pull requests specifically. For complex pull requests, ADD cycles can generate summaries of changes, flag potential issues, or produce review checklists tailored to the code under review. AI assistance in code review complements human judgment rather than replacing it.

Document review patterns. Over time, you will notice patterns in what code review catches for AI-generated code versus human-written code. Document these patterns. They inform both your evaluation practices and your specification practices. If reviews consistently catch the same type of issue in AI-generated code, your specifications should prevent that issue from occurring.


Let’s Continue the Conversation

Which context do you work in most often? How has AI assistance worked differently in greenfield versus legacy environments?

Have you used AI for refactoring or test generation? What worked well, and what challenges did you encounter?

Share your experience in the comments. Different contexts surface different challenges, and collective experience helps everyone apply ADD more effectively.
