Field Notes 004 · April 2026

Logic Drift

The code works. It passes every test. It ships on time. And it still violates three business rules that nobody wrote down.

I call this logic drift — the slow, invisible divergence between what the code does and what the business intended it to do. It is not a bug. Bugs break things visibly. Logic drift breaks things quietly. You do not find it in your error logs. You find it in your quarterly review when someone asks why the numbers do not match the plan.

I have seen this pattern destroy more value than any technical failure. A recommendation engine that optimized for engagement instead of margin. A pricing algorithm that honored discount rules the business had retired two quarters ago. An automated review system that approved every edge case the policy team specifically wanted flagged. All of these systems worked. None of them were correct.

Where it starts

Logic drift starts at the handoff. Someone describes what the system should do. Someone else translates that into a spec. A third person translates the spec into code. At every handoff, fidelity drops. Context evaporates. The business rule "never discount below twenty percent margin" becomes a variable called min_discount whose origin nobody remembers six months later.
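A sketch of what that loss looks like, with a hypothetical constant and invented names. Nothing about the second version is clever. It just keeps the intent attached to the number.

    # What the business said: "Never discount below twenty percent margin."
    # What survives the handoffs (hypothetical):
    min_discount = 0.2  # minimum what? margin? discount? nobody remembers

    # The same constant with its intent still attached (also hypothetical):
    MARGIN_FLOOR = 0.20  # no sale may take gross margin below 20%
                         # source: pricing policy update, owner: finance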

With AI-generated code this problem accelerates. An agent can produce hundreds of lines in seconds. It can build an entire pipeline in a single session. But it cannot know that the business changed its margin floor last Tuesday, or that the compliance team added a new data residency requirement that is not in any codebase yet. The agent builds what you told it to build. If what you told it was incomplete, the output will be precisely, confidently wrong.

This is not a criticism of the tooling. It is a criticism of the process. The agent is doing exactly what it was asked to do. The failure is upstream — in the gap between business intent and technical specification.

The traceability problem

In traditional software development, you can usually trace a piece of logic back to a requirement. A ticket, a design doc, a conversation in Slack. The chain is messy but it exists. With AI-assisted development at speed, that chain is breaking.

Teams are shipping features faster than they can document the intent behind them. The spec says "build an automated approval workflow." The agent builds it. But nobody recorded the seven edge cases the product manager mentioned in a call that never got transcribed. Nobody wrote down which approval thresholds were hard business rules and which were soft guidelines. The code works. The intent is lost.

Six months later, someone needs to modify the workflow. They read the code. The code tells them what it does, not why. They make a change that is technically sound and business-wrong. Another layer of drift.

What verification actually means

Most teams think verification means testing. Run the unit tests, run the integration tests, check the coverage number, ship it. That catches bugs. It does not catch logic drift. You cannot write a test for a business rule that was never formalized.
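The inverse is also true: once the rule is written down, a test for it becomes possible. A minimal sketch in Python, with an invented pricing function and invented numbers. The only thing that matters here is that the assertion exists because the margin rule was formalized as a named constant.

    # Hypothetical: the margin rule, formalized.
    MARGIN_FLOOR = 0.20  # "Never discount below twenty percent margin."

    def apply_discount(price: float, cost: float, discount: float) -> float:
        """Invented pricing function: clamps the discount so margin stays above the floor."""
        floor_price = cost / (1 - MARGIN_FLOOR)  # lowest price that keeps margin at 20%
        return max(price * (1 - discount), floor_price)

    def test_discount_honors_margin_floor():
        price, cost = 100.0, 70.0
        for discount in (0.05, 0.10, 0.25, 0.40):
            final = apply_discount(price, cost, discount)
            margin = (final - cost) / final
            # This assertion is possible only because the rule above was written down.
            assert margin >= MARGIN_FLOOR - 1e-9, f"discount {discount} breaches the margin floor"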

Real verification means closing the loop between intent and implementation. It means someone who understands the business rules — not just the code — reviews what was built and confirms that the behavior matches the intent. Not the spec. The intent. Those are different things. The spec is a lossy compression of the intent.

This is what gates four through six of the Intent Stack are designed to enforce. Gate four asks whether the spec is complete — whether every business rule, edge case, and constraint has been captured before a single line of code is generated. Gate five asks whether the spec is AI-ready — structured in a way that an agent can execute without ambiguity. Gate six asks whether what was built actually matches what was specified.

That last gate is the one most teams skip. They ship the feature, check the metrics, and move on. Nobody goes back to the original intent and asks: is this system doing what we actually wanted?

The compound cost

Logic drift compounds. Every feature built on top of a drifted foundation inherits the drift. A pricing engine with a stale margin rule feeds into a reporting dashboard that now shows incorrect profitability. A recommendation system optimizing the wrong metric feeds into a retention model that optimizes for the wrong users. Each layer looks correct in isolation. The system-level behavior diverges further from intent with every addition.

I worked with a team last year that had eighteen months of compounded logic drift in their core transaction pipeline. Every individual component passed its tests. The end-to-end behavior violated four business rules that had been updated over that period but never propagated to the codebase. The cost of unwinding it was larger than the cost of building it in the first place.

That is the real price of skipping verification. Not the cost of a single missed rule. The cost of every decision built on top of that missed rule over time.

Closing the loop

The fix is not more testing. It is structured traceability. Every business rule that matters should exist in a format that is both human-readable and machine-checkable. Every spec should reference the rules it implements. Every review should verify that the implementation honors those references.
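What that can look like in practice is not complicated. Here is a sketch with invented rule IDs, owners, and dates, in whatever format your team already reads. The point is not the format. The point is that a human can read the rule, a spec can reference it by ID, and a script can flag the rules that nothing references.

    # business_rules.py -- hypothetical, minimal rule registry (names and dates invented)
    RULES = {
        "PRICING-007": {
            "text": "Never discount below twenty percent gross margin.",
            "owner": "finance",
            "effective": "2026-02-10",
        },
        "DATA-012": {
            "text": "Customer records must not leave their home region.",
            "owner": "compliance",
            "effective": "2026-03-01",
        },
    }

    # Rule IDs referenced by specs and tests, e.g. collected from docstrings or markers.
    REFERENCED = {"PRICING-007"}

    def unverified_rules() -> list[str]:
        """Rules no spec or test claims to implement: the first place drift hides."""
        return sorted(set(RULES) - REFERENCED)

    if __name__ == "__main__":
        for rule_id in unverified_rules():
            print(f"UNVERIFIED {rule_id}: {RULES[rule_id]['text']}")

Run something like this in the review step and that final check stops depending on anyone's memory.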

This is not bureaucracy. It takes less time than debugging a production incident caused by a stale business rule. It takes less time than the quarterly fire drill where engineering and product spend two weeks figuring out why the system is doing something nobody asked it to do.

The most expensive line of code is the one that works perfectly and does the wrong thing.

Logic drift is not a technology problem. It is a communication problem with technology consequences. The teams that solve it are not the ones with the best test coverage. They are the ones that never let intent leave the room without being written down.