
Signal Through the Noise

Honest takes on code, AI, and what actually works


Specify: The Most Important Skill in AI-Driven Development

Posted on January 28, 2026 by ivan.turkovic

If you take one thing from this entire series, let it be this: the quality of AI-generated code is bounded by the quality of your specification. No amount of model capability, prompt engineering tricks, or iteration can overcome a vague specification. The ceiling of what AI can produce for you is set by the clarity of what you ask for.

This is why specification is the first phase of the ADD cycle, and why I consider it the most important skill in AI-driven development. Everything else depends on getting this right.

Why Specification Comes First

In unstructured AI usage, specification is often an afterthought. You have a task in mind. You describe it quickly to the AI. You see what comes back. If the output is not quite right, you iterate with follow-up prompts. The specification emerges through conversation rather than preceding it.

This approach feels efficient. Why spend time writing detailed specifications when you can just ask and see what happens? The AI is fast. Iteration is cheap. You can course-correct as you go.

But this efficiency is illusory. Each iteration costs more than it appears. You must read and evaluate output that may be entirely wrong. You must formulate follow-up prompts that address gaps you only discovered by seeing incorrect output. You accumulate context in the conversation that makes it harder to start fresh if needed. And you often accept “good enough” output because you have already invested time iterating and do not want to start over.

More fundamentally, the iterate-until-acceptable approach fails to surface the real problem: you did not know precisely what you wanted. The vagueness was not in your prompt. It was in your thinking. The AI cannot resolve ambiguity you have not resolved yourself.

Specification-first inverts this dynamic. Before you generate anything, you articulate exactly what you need. This forces you to think through requirements, constraints, and context before the AI introduces its own assumptions. It surfaces ambiguity early, when addressing it costs nothing but thought, rather than late, when addressing it costs rework.

The discipline of specification-first is uncomfortable at first. It feels like unnecessary overhead. But teams that adopt this discipline consistently report that they spend less total time on AI-assisted tasks because they spend less time iterating and reworking.

Anatomy of an Effective Specification

An effective specification has three components: requirements, constraints, and context. Each serves a distinct purpose, and omitting any of them produces gaps the AI will fill with assumptions.

Requirements describe what the code must do. These are the functional specifications: inputs, outputs, behaviors, and transformations. Requirements answer the question “what should this code accomplish?”

Effective requirements are specific and testable. “Validate email addresses” is a requirement, but it is not specific enough. Validate according to what standard? What constitutes an invalid email? What should happen when validation fails? Each ambiguity is a place where the AI will make choices you did not authorize.

A better requirement might be: “Accept a string input and determine whether it represents a valid email address according to RFC 5321 syntax rules, including support for quoted local parts and domain literal addresses. Return a ValidationResult object containing a boolean isValid field and, when invalid, an errorCode field indicating the specific validation failure.”

Notice how the better requirement answers questions the vague requirement leaves open. It specifies the standard (RFC 5321), the scope (including quoted local parts and domain literals), the return type (ValidationResult object), and the failure behavior (errorCode for specific failures). The AI now knows what “valid” means in your context.
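To make the contract concrete, here is a minimal sketch of what code satisfying that requirement could look like. The ValidationResult shape and the error codes are taken from the spec above, but the validation logic itself is a deliberately simplified stand-in: real RFC 5321 parsing (quoted local parts, domain literals) needs a dedicated parser and is far more involved than these checks.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ValidationResult:
    is_valid: bool
    error_code: Optional[str] = None  # populated only when invalid

def validate_email(address: str) -> ValidationResult:
    # Simplified stand-in for full RFC 5321 syntax validation.
    # A production implementation would use a proper parser;
    # these checks only illustrate the contract the spec defines.
    if "@" not in address:
        return ValidationResult(False, "MISSING_AT_SIGN")
    local, _, domain = address.rpartition("@")
    if not local:
        return ValidationResult(False, "EMPTY_LOCAL_PART")
    if not domain:
        return ValidationResult(False, "EMPTY_DOMAIN")
    return ValidationResult(True)
```

The point is not the validation logic but the shape of the interface: because the spec named the return type and the failure behavior, you can check generated code against it mechanically.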

Constraints describe boundaries the code must respect. These are the rules about how the code should be written, not just what it should do. Constraints answer the question “what restrictions apply?”

Constraints include technology requirements (use this library, target this Python version), performance requirements (must handle N requests per second, must complete in under M milliseconds), security requirements (must sanitize inputs, must not log sensitive data), and stylistic requirements (follow this naming convention, match this existing pattern).

Constraints are often omitted from casual prompts because they feel obvious to the developer. Of course the code should be secure. Of course it should perform well. But “obvious” constraints are not obvious to the AI. It will generate insecure code if security is not specified. It will generate inefficient code if performance is not specified. What you do not state, you leave to chance.

I have seen developers frustrated that AI-generated code used a deprecated library when a better alternative existed. But they never told the AI which libraries to use. I have seen developers annoyed that generated code did not match their formatting conventions. But they never provided those conventions. The AI is not being obtuse. It simply has no access to information you have not provided.

Context describes the environment the code will operate in. This is information about the larger system: existing patterns, interfaces, dependencies, and integration points. Context answers the question “where does this code live?”

Context matters because AI-generated code must fit into an existing system. A function that is correct in isolation may be wrong in context. It may use different error handling than the rest of your codebase. It may duplicate functionality that already exists. It may introduce patterns inconsistent with your architecture.

Providing context means showing the AI what your codebase looks like. Include examples of similar functions. Show the interfaces the new code must implement or consume. Describe the architectural patterns you follow. The more context you provide, the better the AI can generate code that fits.

Effective context provision is a skill that develops with practice. Early on, you may not know what context matters. As you see how omitted context leads to integration problems, you learn to provide more upfront. The goal is to give the AI enough information to write code that looks like it belongs in your system.

The Vague-to-Precise Spectrum

Specifications exist on a spectrum from vague to precise. Understanding where your specification falls on this spectrum helps you recognize when more work is needed before generation.

At the vague end: “Write a function to process user data.”

This specification provides almost nothing. What kind of processing? What user data? What format? What output? The AI must invent answers to every important question. The output might technically satisfy the request while being entirely useless for your actual needs.

Slightly better: “Write a Python function that takes a user object and validates the email field.”

This adds language, subject matter, and a specific operation. But it still leaves critical questions unanswered. What is the structure of the user object? What validation rules apply? What happens on success or failure? The AI still has significant room to make choices you might not want.

Better still: “Write a Python function that takes a User dataclass (defined in models.py with fields: id, email, name, created_at) and validates the email field using the email-validator library. Return True if valid, raise InvalidEmailError with a descriptive message if invalid.”

Now we have concrete types, a specific library, and defined behavior for both success and failure. The AI has much less room for unwanted interpretation.
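A sketch of code satisfying that specification might look like the following. The User dataclass fields come from the spec; to keep this example self-contained, a simple regex stands in for the email-validator library the spec names, so treat the validation rule itself as a placeholder.

```python
import re
from dataclasses import dataclass
from datetime import datetime

class InvalidEmailError(ValueError):
    """Raised when a User's email field fails validation."""

@dataclass
class User:
    id: int
    email: str
    name: str
    created_at: datetime

# Placeholder pattern standing in for the email-validator library,
# so the sketch runs without third-party dependencies.
_EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_user_email(user: User) -> bool:
    """Return True if user.email is valid, else raise InvalidEmailError."""
    if not _EMAIL_RE.match(user.email):
        raise InvalidEmailError(
            f"Invalid email for user {user.id}: {user.email!r}"
        )
    return True
```

Notice that every decision in this code traces back to a sentence in the specification: the input type, the success value, and the failure behavior were all chosen before generation, not by the AI.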

At the precise end: A specification that includes the function signature, the validation rules with specific edge cases, the error messages to use, examples of valid and invalid inputs, the relationship to existing code, and any performance or security requirements.

Moving along this spectrum requires effort. The more precise your specification, the more thinking you must do before generation. But this thinking is not overhead. It is the actual work of software development. The specification captures decisions that must be made regardless of whether a human or an AI writes the implementation.

Bad Specification vs Good Specification

Let me illustrate with a concrete example. Suppose you need a function to calculate shipping costs.

Bad specification:

“Write a function to calculate shipping costs based on package weight and destination.”

This will produce a function. The function might even work for some cases. But the AI will decide:

  • What weight unit to use
  • What destinations are valid
  • How to structure the pricing
  • What to do with edge cases like zero weight or unknown destinations
  • What to return and in what format
  • Whether to include taxes or handling fees

You will receive code built on invisible assumptions that may not match your needs.

Good specification:

“Write a Python function calculate_shipping_cost with the following specification:

Inputs:

  • weight_kg: float, the package weight in kilograms (must be > 0 and <= 70)
  • destination_zone: str, one of ‘domestic’, ‘north_america’, ‘europe’, ‘asia’, ‘other’

Output:

  • Decimal representing the shipping cost in USD, rounded to 2 decimal places

Pricing rules:

  • Base rate: $5.00 for domestic, $15.00 for north_america, $25.00 for europe, $30.00 for asia, $40.00 for other
  • Per-kg rate: $0.50 for domestic, $2.00 for all international zones
  • Total = base_rate + (weight_kg * per_kg_rate)

Error handling:

  • Raise ValueError with message ‘Weight must be between 0 and 70 kg’ if weight is out of range
  • Raise ValueError with message ‘Unknown destination zone: {zone}’ if destination_zone is not recognized

Additional requirements:

  • Use Decimal for all calculations to avoid floating-point errors
  • Match the pattern used in pricing/domestic.py for consistency
  • Include type hints
  • Do not round intermediate calculations, only the final result”

This specification leaves little room for interpretation. The AI knows exactly what to build. You can evaluate the output against specific criteria. Any deviation from the specification is clearly a bug rather than a reasonable interpretation of ambiguous requirements.
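To see how little is left to chance, here is one implementation that satisfies the specification above, written directly from its pricing rules and error messages. (The spec's reference to pricing/domestic.py is context this sketch cannot see, so that constraint is noted rather than applied.)

```python
from decimal import Decimal, ROUND_HALF_UP

# Base rates per zone, taken verbatim from the specification.
BASE_RATES = {
    "domestic": Decimal("5.00"),
    "north_america": Decimal("15.00"),
    "europe": Decimal("25.00"),
    "asia": Decimal("30.00"),
    "other": Decimal("40.00"),
}

def _per_kg_rate(zone: str) -> Decimal:
    # $0.50/kg domestic, $2.00/kg for all international zones.
    return Decimal("0.50") if zone == "domestic" else Decimal("2.00")

def calculate_shipping_cost(weight_kg: float, destination_zone: str) -> Decimal:
    """Return the shipping cost in USD, rounded to 2 decimal places."""
    if not 0 < weight_kg <= 70:
        raise ValueError("Weight must be between 0 and 70 kg")
    if destination_zone not in BASE_RATES:
        raise ValueError(f"Unknown destination zone: {destination_zone}")
    # Decimal throughout; round only the final result, per the spec.
    total = BASE_RATES[destination_zone] + (
        Decimal(str(weight_kg)) * _per_kg_rate(destination_zone)
    )
    return total.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
```

Every line of this function answers to a line of the specification. If the generated code had returned a float, accepted a 100 kg package, or worded an error differently, you could point to the exact requirement it violated.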

Specification as a Thinking Tool

Writing detailed specifications is not just about communicating with AI. It is a thinking tool that surfaces problems before they become expensive.

When you write a specification, you are forced to answer questions you might otherwise defer. What exactly should happen when the input is invalid? The specification requires an answer. What performance characteristics matter? The specification requires you to decide. How does this component interact with existing code? The specification requires you to examine those interfaces.

Many developers find that the act of writing a specification reveals gaps in their understanding. “I thought I knew what I needed, but when I tried to write it down, I realized I had not decided how to handle X.” This revelation before generation is valuable. Discovering that you do not know what you want is much cheaper before you have generated and partially integrated code based on the wrong assumptions.

Specification also serves as documentation. A well-written specification describes not just what the code does but why certain decisions were made. When you or a teammate revisits the code later, the specification provides context that the implementation alone cannot.

Teams that practice specification-first often find that their specifications become reusable artifacts. A specification for a common pattern can be adapted for similar tasks. The investment in a good specification pays dividends beyond the immediate generation.

Common Specification Failures

Even developers who understand the importance of specification make predictable mistakes. Recognizing these patterns helps you avoid them.

Implicit constraints. You know your codebase uses a particular error handling pattern, so you do not mention it. The AI generates code with different error handling. You think “obviously it should match our patterns,” but you did not say so. Always state constraints explicitly, even when they feel obvious.

Missing edge cases. You specify the happy path thoroughly but neglect to mention what should happen with empty inputs, null values, extremely large inputs, or malformed data. The AI will handle these cases somehow, but probably not the way you want.

Assumed context. You reference “the User model” without providing its definition, assuming the AI knows your codebase. It does not. Include definitions, examples, or explicit descriptions of anything you reference.

Premature implementation details. You specify not just what the code should do but exactly how it should do it, at an algorithmic level. This over-constrains the AI and may produce worse results than letting it choose implementation approaches. Specify outcomes and constraints, not algorithms, unless the algorithm itself is a requirement.

Conflicting requirements. You specify that the code should be both maximally performant and maximally readable, without acknowledging that these goals sometimes conflict. When requirements conflict, specify priorities.

Missing integration points. You specify the function in isolation without describing how it connects to the rest of the system. The AI produces a correct function that does not fit your architecture.

Practical Exercise: Write Specifications for Existing Code

Here is an exercise that will sharpen your specification skills: take a function from your codebase and write a specification that would produce it.

Choose a function of moderate complexity, something that handles a few edge cases and integrates with other parts of your system. Now imagine you needed to regenerate this function using AI. What specification would you write?

As you write, notice what you must include. The obvious functional requirements, yes, but also the subtle constraints: the error handling pattern, the logging approach, the way it interacts with other components. Notice what you might have omitted if you were writing a casual prompt.

Now compare your specification to what you would have written before reading this post. Is it more detailed? More explicit? Does it capture decisions that your casual prompt would have left to chance?

This exercise builds the mental muscle for specification. The more you practice articulating what code needs to do, the more naturally you will write detailed specifications before generation.

From Specification to Generation

A good specification sets the stage for effective generation. When you have clearly articulated what you need, the generation phase becomes more focused and more likely to succeed on the first attempt.

The relationship between specification quality and generation quality is not linear. A specification that is 80% complete does not produce output that is 80% correct. It produces output that may be entirely wrong in the 20% you did not specify, and that wrongness often infects the parts you did specify. The AI makes choices in the unspecified areas, and those choices constrain what is possible in the specified areas.

This is why specification deserves the investment. The return on careful specification is disproportionate to the time invested.

But specification alone is not sufficient. The next phase, Generate, involves managing context, selecting appropriate prompt patterns, and iterating effectively when needed. A good specification makes generation easier, but generation has its own disciplines.

In the next post, I will provide a library of specification templates for common development tasks. These templates encode the lessons from this post into reusable formats that ensure completeness and consistency. They reduce the cognitive load of specification while maintaining the precision that produces good results.

For now, begin practicing specification-first. Before your next AI interaction, pause. Write down what you actually need. Be specific about requirements, constraints, and context. Then generate.

The difference in output quality will make the case better than any argument I can offer.


Let’s Continue the Conversation

Try the exercise: write a specification for existing code in your project. What did you discover about the hidden complexity of what you thought was a simple function?

What specification failures have you encountered? What happened when the AI filled gaps you did not know existed?

Share your experiences in the comments. Specification is a skill that improves with practice, and learning from each other’s discoveries accelerates that improvement.
