Working with Legacy Code — How to Change Old, Scary, Untested Code Without Fear

Prologue — You Will Inherit Legacy Code

New projects do not stay new for long. Six months in, yesterday's new code is today's legacy. A year in, there is a module whose internals nobody quite knows. Three years in, the phrase "don't touch that one" shows up in meetings.

Everyone defines legacy code differently. Some say "old code," some say "code I didn't write." Michael Feathers' definition, from Working Effectively with Legacy Code, is the most useful: legacy code is code without tests. Without tests, there is no way to know what broke when you changed something. So touching it is scary. Because it is scary you change as little as possible, and because you change as little as possible the code gets stranger and stranger.

Add a second, more honest definition: legacy code is code you are afraid to change. The fear is the core. Old or new, tested or not, if your hands shake when you change it, it is legacy to you.

This post is about the craft of handling that fear. Not beating the fear with "courage," but turning it into procedure. When there is a safe order for changing things, the code can be scary and your hands still will not shake. You pin current behavior with characterization tests, find seams to insert tests, isolate risk with sprout and wrap, and replace whole systems gradually with the strangler fig. And you will see where AI agents are an accelerant and where they are a landmine in this work.

One line: the secret to changing legacy code safely is not courage but procedure. Pin the current behavior before you touch it — even if that behavior is a bug.


Chapter 1 · The Legacy Code Dilemma — Chicken and Egg

What do you need to change legacy code safely? Tests. You have to be able to compare before and after to say "nothing broke."

But what do you need to add tests? You have to change the code. You split functions to make them testable, make dependencies injectable, and rip out global state.

Here is the dilemma.

What you want              | What it requires first           | But
Change the code safely     | Tests must exist                 | There are no tests
Add tests                  | Change the code to be testable   | You do not know that change is safe
Confirm the change is safe | Tests must exist                 | Back to start

This is the essential difficulty of legacy work. Chicken and egg. Without tests you cannot change safely, and without changing you cannot add tests.

There are two ways out.

First, accept a minimal risky change. Make the change needed to insert a test as small, mechanical, and reversible as possible. "Extract method," "variable to parameter," "take a dependency in the constructor" — your IDE does these automatically, and they barely change behavior. The risk is not zero, but it is small enough.

Second, find points you can test without changing. That is a seam. Without rewriting the code, you insert a test through a boundary that already exists. That is the subject of Chapters 3 and 4.

The key is order. First, throw up a safety net that pins current behavior (characterization tests). Then refactor the code under that net. If the net turns red, you stop. Not a reckless "fix it and see," but checking your footing at every step.

Legacy work is like climbing. You move one hand only after the other is secured. You never let go with both hands at once.


Chapter 2 · Characterization Tests — Pin the Current Behavior

The characterization test is the starting point of legacy work. The name holds the core. This test does not verify what the code should do. It pins what the code currently does.

The difference matters. A normal test starts from a spec — "this function should return X." A characterization test starts where there is no spec, or the spec cannot be trusted. Instead of a spec, it accepts the current behavior itself as the truth.

Why "current behavior," even if it is buggy

The current behavior of legacy code has bugs mixed in. And yet the characterization test pins those bugs too. It sounds odd, but there is a reason.

Right now you are trying to understand this code's behavior, not fix it. Which behavior is the intended feature and which is a bug — you cannot tell yet. Someone may be depending on that "bug" (Hyrum's Law). So pin all of it for now. Then, once you have confirmed which behavior is a bug, you change that test deliberately — that is a clear decision, not something you broke unknowingly.

Normal test:          spec -> write test -> make code pass
Characterization:     run code -> observe output -> pin that output as expected

The procedure for writing a characterization test

  1. Get the code onto a test harness. Build the minimal environment that can call the function. If you can give it input and receive output, that is enough.
  2. Write an assertion with an obviously wrong expected value. Something like assertEquals("THIS_IS_WRONG", result).
  3. Run the test and look at the failure message. The test runner tells you "expected: THIS_IS_WRONG, actual: 42." That 42 is the code's current behavior.
  4. Pin the actual output as the expected value. assertEquals(42, result). Now the test passes.
  5. Repeat. Pin behavior with varied inputs — normal values, boundary values, empty, 0, negative, null.
// before — a legacy function with no spec for what it does
function computeDiscount(order) {
  let d = 0
  if (order.total > 100) d = order.total * 0.1
  if (order.coupon === 'VIP') d += 5
  if (order.items.length > 10) d = Math.min(d, 20)
  return Math.round(d * 100) / 100
}

// after — a characterization test that pins the current behavior
test('characterizes computeDiscount', () => {
  // not a spec, a record of "this is how it currently behaves"
  expect(computeDiscount({ total: 50, coupon: null, items: [] })).toBe(0)
  expect(computeDiscount({ total: 150, coupon: null, items: [] })).toBe(15)
  expect(computeDiscount({ total: 150, coupon: 'VIP', items: [] })).toBe(20)
  // this one looks like a bug — pin it for now, change it deliberately later
  expect(computeDiscount({ total: 50, coupon: 'VIP', items: [] })).toBe(5)
})

Now you can refactor computeDiscount under this safety net. Fix variable names, split the function, clean up the conditions. As long as the test stays green, the behavior is unchanged. If it turns red — something changed, and if it was not intended, you revert.

Golden Master — when the output is huge

If a function produces not a simple value but a huge output (an HTML page, a JSON document, a log file, an image), save the whole output to a file. This is the golden master, or snapshot test. Feed 100 inputs, pin the 100 outputs to files. After refactoring, generate the 100 again and diff against the files. If even one character differs, the test catches it.

The strength of the golden master is that it can throw a safety net over code whose behavior you do not understand one bit. The weakness is that when a diff appears, a human has to judge whether that change was intended.
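
The golden-master loop can be sketched in a few lines. Everything here is illustrative: renderReport stands in for any legacy function with large output, and a real setup would write the master to files under version control rather than hold it in a variable.

```javascript
// a hypothetical legacy function whose output is "huge" for the sake of the sketch
function renderReport(order) {
  const discount = order.total > 100 ? order.total * 0.1 : 0
  return `REPORT\ntotal=${order.total}\ndiscount=${discount}\n`
}

// record the golden master once: run every input, keep the outputs verbatim
function recordMaster(inputs, fn) {
  return inputs.map(fn).join('---\n')
}

// after refactoring: regenerate and diff against the recorded master
function diffAgainstMaster(master, inputs, fn) {
  const current = recordMaster(inputs, fn)
  return master === current ? null : { expected: master, actual: current }
}

const inputs = [{ total: 50 }, { total: 150 }, { total: 0 }]
const master = recordMaster(inputs, renderReport)  // in real use: write to a file

// a behavior-preserving change produces no diff
diffAgainstMaster(master, inputs, renderReport)    // null
```

If even one character of one output changes, the diff comes back non-null, and a human decides whether that change was intended.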


Chapter 3 · Finding Seams — Places to Insert Tests Without Rewriting

The seam is Feathers' most important concept. A seam is a place where you can change behavior without editing the code in that spot.

The real reason legacy code is hard to test is not that the logic is complex. It is the dependencies. In the middle of a function, the code connects to a database directly, reads the current time directly, calls the network directly, touches a global singleton directly. You cannot stand all of that up for real in a test.

A seam is a channel through which you can swap that dependency for a fake at test time. With a seam, without touching the function body, you can push a fake through that channel and test.

Kinds of seams

Seam kind            | How you swap                                            | Example
Object seam          | Inject a fake object implementing the interface/class   | Dependency injection via constructor or setter
Parameter seam       | Make the function take the dependency as an argument    | A clock argument instead of now()
Function/module seam | Replace an import or function reference in the test     | Module mocking, function pointer
Subclass seam        | Test with a subclass that overrides the risky method    | Test-only subclass
Build seam           | Wire a different implementation at build/link time      | Link a different file in the test build
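
The object seam deserves a concrete sketch, since the worked example below covers the parameter seam. OrderService and its gateway are hypothetical names; the point is that the dependency arrives through the constructor, so a test can hand in a fake.

```javascript
// object seam: the gateway is injected, not constructed inside the class
class OrderService {
  constructor(gateway) {
    this.gateway = gateway                 // the seam: tests can pass a fake here
  }
  placeOrder(order) {
    const result = this.gateway.charge(order.total)
    return result.success ? 'placed' : 'rejected'
  }
}

// the test pushes a fake through the seam (no network, no real gateway)
const fakeGateway = { charge: () => ({ success: true }) }
new OrderService(fakeGateway).placeOrder({ total: 30 })  // 'placed'
```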

The most common surgery — pulling a dependency out into a parameter

The most frequent seam work in legacy code is "pulling a dependency buried inside a function out into a parameter." It is a mechanical change that does not alter behavior, and most IDEs do it automatically.

// before — the time dependency is buried inside the function. Untestable.
function isSubscriptionExpired(subscription) {
  const now = Date.now()                 // hidden dependency
  return subscription.expiresAt < now
}

// after — now pulled out into a parameter. This is the parameter seam.
function isSubscriptionExpired(subscription, now = Date.now()) {
  return subscription.expiresAt < now
}

// now the test can control time — the function body is unchanged
test('expired when expiresAt is in the past', () => {
  const sub = { expiresAt: 1000 }
  expect(isSubscriptionExpired(sub, 2000)).toBe(true)   // inject now=2000
  expect(isSubscriptionExpired(sub, 500)).toBe(false)
})

The existing callers do not change one character — because now has a default value. But the test now controls time freely. That is the power of a seam. The call site stays the same; only the test site opens up.

Not being able to test legacy code is almost always a dependency problem. Finding seams is the same as finding "which dependency to fake, and how."


Chapter 4 · The Boy Scout Rule — Incremental Improvement

You cannot make an entire legacy codebase clean in one shot. There is no time, and usually no business reason to. So you need the principle of incremental improvement.

The Boy Scout Rule: leave the campground a little cleaner than you found it. Applied to code — when you have a reason to touch a file, you make its surroundings a little better while you are doing that work, and you leave.

"A little" is the core. Do not refactor an entire module just because you went in to fix a bug. When that change gets large, review gets hard, the bug fix and the refactor get mixed into one commit, and when something breaks it is hard to tell what caused it.

Scope     | The right amount                                                | Way too much
Names     | Clean up 1-2 vague variable names in the function you touched   | Bulk-rename variables across the whole file
Structure | Extract one chunk from the function you just touched            | Redesign the entire class hierarchy
Tests     | Add a characterization test for the bug you fixed this time     | Do test coverage work for the whole module
Dead code | Delete one function that is obviously never called              | Mass-delete code that "seems unused"

Two key rules. First, separate the bug-fix commit from the refactor commit. A reviewer should be able to see "this is a behavior-changing change, this is a non-behavior-changing change" separately. Second, only refactor under a safety net. If the code you want to clean up has no characterization test, throw up the test first.

The cumulative effect of incremental improvement is large. A little at a time, the places people touch often get better. That the often-touched places get better matters — code nobody touches barely needs to be clean. Where change happens often, that is where the investment is worth it.


Chapter 5 · Sprout Method — Start New Code in a Clean Place

You have to add a feature, but the spot is the middle of a 200-line untestable function. What do you do?

Bad answer: cram the new logic into the middle of those 200 lines too. The function becomes 220 lines and even more untestable.

Good answer: sprout method. You do not write the new logic inside the existing function. You make it a separate new method — and you write that new method cleanly, with tests — and the existing function only calls that new method.

// before — an order processing function. Long, hard to test.
function processOrder(order) {
  // ... 150 lines of validation, inventory, payment ...

  // you have to add a "premium customer loyalty points" feature here
  // bad choice: cram 20 more lines into this spot

  // ... the remaining 50 lines ...
}

// after — the new logic goes into a sprout method. Clean, with tests.
function calculateLoyaltyPoints(order) {        // sprout: new method, testable
  if (!order.customer.isPremium) return 0
  const base = Math.floor(order.total / 10)
  return order.hasPromoCode ? base * 2 : base
}

function processOrder(order) {
  // ... the 150 lines stay as is — you do not touch them ...

  const points = calculateLoyaltyPoints(order)  // the existing function only calls
  order.customer.points += points

  // ... the remaining 50 lines stay as is too ...
}

// the sprout method has tests from the start
test('premium customer with promo code earns double points', () => {
  const order = { total: 100, hasPromoCode: true, customer: { isPremium: true } }
  expect(calculateLoyaltyPoints(order)).toBe(20)
})

There are three key gains. First, the new code is 100 percent tested — because you wrote it fresh in a clean place. Second, you barely touch the existing 200 lines — you added one line that calls the sprout method. The risk is isolated to that one line. Third, the legacy function shrinks bit by bit — new features pile up outside the function, not inside it.

The sprout method is a compromise: "I cannot clean up the legacy code right now, but at least I will not make it worse." And over time that compromise naturally makes the legacy function smaller.


Chapter 6 · Sprout Class — When a Sprout Method Is Not Enough

You use a sprout method when the new logic fits in a single method. But if the new feature is bigger than that — it has state, several intertwined behaviors, and needs collaborators of its own — a single method is not enough.

This is where you use a sprout class. You make the new responsibility a whole new class. The legacy class only instantiates and delegates to that new class.

Another strong reason to use a sprout class: when the legacy class itself will not go onto a test harness. If the legacy class's constructor connects to a database, or drags in a huge dependency graph, even a sprout method inside it is hard to test. The new class is outside that swamp, so you can test it freely.
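
Here is a minimal sketch of the shape, with hypothetical names: a LoyaltyProgram sprouted next to a legacy processor whose constructor drags in a database.

```javascript
// sprout class: the new responsibility gets its own class with its own state
class LoyaltyProgram {
  constructor(unitSize = 10) {
    this.unitSize = unitSize   // 1 point per 10 currency units
    this.earned = new Map()    // state + several behaviors: too big for a sprout method
  }
  award(customerId, total) {
    const points = Math.floor(total / this.unitSize)
    this.earned.set(customerId, this.balance(customerId) + points)
    return points
  }
  balance(customerId) {
    return this.earned.get(customerId) || 0
  }
}

// the legacy class only instantiates and delegates
class LegacyOrderProcessor {
  constructor() {
    // imagine a database connection opening here: the reason this class
    // will not go onto a test harness
    this.loyalty = new LoyaltyProgram()
  }
  process(order) {
    // ... untested legacy steps stay exactly as they are ...
    this.loyalty.award(order.customerId, order.total)   // the one delegation line
  }
}
```

The sprout class is fully testable on its own; the only new risk inside the legacy class is the delegation line.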

Sprout method vs sprout class — when to use which

Use a sprout method:
  - the new logic fits cleanly in one method
  - you can get the legacy class onto a test harness (even if with effort)
  - the new logic barely uses the legacy class's state

Use a sprout class:
  - the new responsibility is state + several methods
  - you simply cannot get the legacy class onto a test harness
  - you want to test the new feature independently and reuse it
  - the new responsibility is conceptually separate from the legacy class

The risk of a sprout class is that there is one more class in the system. Used badly, classes explode for every tiny responsibility. So the criterion is "is this responsibility really an independent concept." If it is an independent concept, a sprout class actually improves the design — you have peeled one responsibility off a giant legacy class and given it a name.

The sprout method and the sprout class are two sizes of the same philosophy. Do not build new code inside the legacy swamp; build it on the clean ground next to it and just lay a bridge.


Chapter 7 · Wrap Method — Add Behavior Without Touching the Existing Behavior

Sprout is for adding "completely new logic." But there is another situation. At the exact moment the existing behavior happens, you want to do something more. For example, "I want to leave an audit log every time a payment happens" — you do not want to change the payment logic itself.

This is where you use a wrap method. You rename the existing method and set it aside, and make a new method with the original name. The new method calls the old method, and adds new behavior before or after it.

// before — payment logic. Callers are all over the place. You do not want to touch the body.
class PaymentService {
  processPayment(order) {
    const result = this.gateway.charge(order.total, order.card)
    order.status = result.success ? 'paid' : 'failed'
    return result
  }
}

// after — wrap method. The original method is set aside with just a rename,
// and a "wrapping" method takes the original name.
class PaymentService {
  // the original body — not one character changed. Renamed, with an underscore
  // to mark it internal by convention.
  _processPaymentCore(order) {
    const result = this.gateway.charge(order.total, order.card)
    order.status = result.success ? 'paid' : 'failed'
    return result
  }

  // the new method takes the original name — callers know nothing
  processPayment(order) {
    this.auditLog.record('payment_attempt', order.id)   // added behavior (before)
    const result = this._processPaymentCore(order)      // call the original behavior as is
    this.auditLog.record('payment_result', order.id, result.success) // added behavior (after)
    return result
  }
}

The core of the wrap method is that you do not touch the original method body by one character. _processPaymentCore is the old code exactly as it was — so there is no risk the old behavior changed. The new behavior is all in the wrapping layer, and that wrapping layer can be tested separately.

A close relative is the wrap class — the decorator pattern. When you want to wrap not one method but a whole interface, you make a class that implements the same interface while holding the original object inside. Combine sprout and wrap and you have a complete tool set for adding features and changing behavior while barely touching the legacy code.
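
A sketch of the wrap class for the PaymentService above. AuditingPaymentService and the audit-log shape are assumptions for illustration.

```javascript
// the original class: not touched at all
class PaymentService {
  constructor(gateway) { this.gateway = gateway }
  processPayment(order) {
    const result = this.gateway.charge(order.total, order.card)
    order.status = result.success ? 'paid' : 'failed'
    return result
  }
}

// wrap class (decorator): same interface, holds the original inside
class AuditingPaymentService {
  constructor(inner, auditLog) {
    this.inner = inner
    this.auditLog = auditLog
  }
  processPayment(order) {
    this.auditLog.record('payment_attempt', order.id)
    const result = this.inner.processPayment(order)   // original behavior, as is
    this.auditLog.record('payment_result', order.id, result.success)
    return result
  }
}

// callers keep calling processPayment; they cannot tell they got the wrapper
const log = { entries: [], record(...entry) { this.entries.push(entry) } }
const gateway = { charge: () => ({ success: true }) }
const service = new AuditingPaymentService(new PaymentService(gateway), log)
```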

Technique     | When                                          | The legacy code is
Sprout method | Add new logic, method-sized                   | One call line added
Sprout class  | Add new responsibility, class-sized           | One delegation line added
Wrap method   | Add behavior at the moment of existing behavior | Renamed only, body preserved
Wrap class    | Add behavior to a whole interface             | Not touched at all

Chapter 8 · The Strangler Fig Pattern — Grow the New System Around the Old

So far the techniques have been at the function and class level. But what if the legacy is a whole system — you have to replace one monolith, one aged service, wholesale? What do you do?

The temptation of the big-bang rewrite comes from here. "Let's write the whole thing fresh and swap it in one day." This almost always fails (more in Chapter 10). The alternative is the strangler fig pattern.

Martin Fowler took the name from the strangler fig of the rainforest. This tree sprouts on top of a host tree, sends roots down, and slowly wraps the host. Decades later the host tree dies and disappears, and the fig stands alone in the host's exact shape. Not a sudden swap, but gradual replacement.

Applied to software, it goes like this.

Strangler fig — the steps

1. Stand an intercepting layer (facade/proxy) in front of the old system.
   All traffic goes through this layer. At first it routes 100 percent to the old system.

2. Pick one feature, and implement only that feature in the new system.
   The intercepting layer routes only that feature's traffic to the new system.

3. Verify. Does the new path produce the same result as the old path?
   If needed, send to both for a while and compare results (shadowing).
   If something goes wrong, route back to the old system immediately — rollback is one line.

4. Repeat with the next feature. The old system's responsibilities move to the new
   system one piece at a time.

5. When traffic to the old system reaches 0 — delete it.
   When the intercepting layer has nothing left to route, take it out too.

The advantages of the strangler fig are clear when compared to the big-bang rewrite.

Item                  | Big-bang rewrite             | Strangler fig
Risk exposure         | All on one launch day        | Spread a little per feature
Feedback              | Only after it is all done    | Immediately from the first feature
Rollback              | Effectively impossible       | One routing line
Business value        | 0 until it is done           | Starts from the first piece
Old/new coexisting    | Does not (so it is risky)    | Does (so it is safe)
If the schedule slips | Everything is at risk        | The old system keeps running

Standing the intercepting layer is the first and most important step. Without that layer, gradual routing is impossible. For an HTTP service it is a reverse proxy or API gateway, for a library a facade class, for a database an abstraction layer. Once you make all traffic go through one point, you can turn one piece at a time at that point.
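
A routing table plus one dispatch function is enough to show the idea. This is a toy sketch: in production the layer would be a reverse proxy or gateway config rather than an in-process function, but the shape is the same.

```javascript
// per-feature routing table; on day one every entry says 'old'
// (here, the invoices feature has already been migrated)
const routing = { invoices: 'new', payments: 'old', reports: 'old' }

const oldSystem = { handle: (feature, req) => `old:${feature}` }
const newSystem = { handle: (feature, req) => `new:${feature}` }

// the intercepting layer: every request passes through this one point
function intercept(feature, req) {
  const target = routing[feature] === 'new' ? newSystem : oldSystem
  return target.handle(feature, req)
}

intercept('invoices', {})   // 'new:invoices' (already migrated)
intercept('payments', {})   // 'old:payments' (still on the old system)

routing.payments = 'new'    // migrating the next feature is one routing change
routing.payments = 'old'    // and rollback is the same one line
```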

The core of the strangler fig is accepting that "the old and the new coexist for a while." Coexistence looks messy, but that messiness is the safety net — because you can go back to the old at any time.


Chapter 9 · Reading Unfamiliar Code Fast — Entry Points, Call Graph, Run It, Logging

To change legacy code you first have to read it. But you cannot read a 100,000-line system from start to finish. Do not try to read all of it. Trace only the path you need.

Start from the entry point

Code starts somewhere — main, an HTTP route handler, an event listener, a cron job, a CLI command parser. First decide how the behavior you want to change appears to the user, and find that behavior's entry point. From there you follow inward layer by layer.

Follow the call graph, ignore the side roads

Start from the entry point and follow the calls, but follow only the calls related to the behavior you want to change now. Ignore side roads like logging, metrics, and config loading for now. Your IDE's "view call hierarchy," "go to definition," and "find usages" are the key tools. Follow the call graph by drawing it on paper or screen, not in your head.

Just run it

Reading alone piles up guesses. Run it and confirm. Set a debugger on the entry point and step through, and which branch is actually taken, what comes into a variable — that becomes fact, not a guess. Stepping through with a debugger once is often faster than glaring at the code for five minutes.

Add logging to understand

In an environment where you cannot use a debugger, drop temporary logs at key points in the call graph. "Reached here," "this variable's value is this" — logs to grasp the flow once. Once you understand the flow, you delete them. This is not debugging, it is drawing a map.
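
A sketch of what those map-drawing logs look like. The pricing function and its branch are invented; in real code each mark would be a console.log you delete once the flow is understood.

```javascript
// temporary map-drawing logs, collected here in an array for illustration
const trace = []
const mark = (point, detail) => trace.push(`${point} ${detail ?? ''}`.trim())

function applyPricing(order) {
  mark('applyPricing:enter', `total=${order.total}`)
  if (order.total > 100) {
    mark('applyPricing:bulk-branch')   // is this branch actually taken?
    order.total = order.total - 15
  }
  mark('applyPricing:exit', `total=${order.total}`)
  return order
}

applyPricing({ total: 150 })
// trace now reads like a map of one run:
//   applyPricing:enter total=150
//   applyPricing:bulk-branch
//   applyPricing:exit total=135
```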

Characterization tests are a learning tool

The characterization tests from Chapter 2 are a safety net, but also a learning tool. As you pin outputs while varying inputs, you learn by hand how the code reacts to which input. The question "this input gives this? why?" becomes the guide for reading the code.

Situation                                     | Fast-reading tool
You do not know where the behavior starts     | Start with the entry-point list — routes, main, listeners
You do not know where a function leads        | IDE call hierarchy, go to definition
You do not know which branch is actually taken | Step from the entry point with a debugger
A flow that only reproduces in production     | Temporary logs at key points
You do not know the input-output relationship | Observe with characterization tests while varying inputs

Chapter 10 · Rewrite vs Refactor — The Rewrite Trap

A thought every engineer facing legacy code has at least once: "it would be faster to write this fresh than to fix it." Sometimes that is right. But most of the time it is a trap. This trap has a name — the rewrite trap.

Why a rewrite almost always takes longer

Legacy code is ugly. So it is easy to underestimate the value held inside it. But in every corner of that ugly code, bug fixes and edge-case handling discovered over years are embedded. Each "weird if statement" was usually added by someone who suffered an outage at 3 a.m. Write it fresh and all of that knowledge is gone. And you will rediscover the same edge cases in the same order — this time in production.

On top of that, the old system does not stop while you rewrite. New features go into the old system and bugs get fixed. The new system chases a moving target. The moment you think you have caught up, the target has moved again.

Rewrite vs refactor — the criteria

Signal            | Refactor is right                                   | A rewrite is worth considering
Does the code work | It works, you are just afraid to change it         | The core feature is actually broken
Domain knowledge  | It lives only in the code, not docs or people       | The domain is simple and well understood
Tech stack        | Old but still supported                             | Security patches have stopped, you cannot hire for it
Incremental path  | You can find seams                                  | A structure where you cannot even stand a strangler layer
Size              | Large                                               | Small enough to rewrite in one sprint
Change frequency  | Changes often (so the improvement value is high)    | Barely changes (leaving it alone is fine)

Two key insights. First, the safe form of what you call a "rewrite" is exactly the strangler fig. If you want to write fresh, do not write it as a big bang; grow it gradually around the old system. Then it is not a rewrite but "gradual replacement," and it is not a trap.

Second, the urge to rewrite usually comes from not understanding the code. Once you have read the code enough with the methods of Chapter 9 and pinned the behavior with the characterization tests of Chapter 2, you often find that the code you said "must be rewritten" actually just needed a few refactors. Before you decide to rewrite, understand it first.


Chapter 11 · Legacy Code in the AI Era — The Agent Is an Accelerant and a Landmine

AI coding agents have changed both sides of legacy work at once. Used well, they shrink the most tedious parts to minutes; used badly, they confidently break code they do not understand.

What agents are good at

Task                                   | Why the agent is strong
Mass-generating characterization tests | Making varied inputs and pinning outputs is tedious but mechanical — the agent is fast
Tracing the call graph                 | It quickly scans "where is this called" in a huge codebase
Finding seam candidates                | It knows the pattern of finding "this function's hidden dependency" and pulling it into a parameter
Summarizing unfamiliar code            | It quickly explains the entry points and flow of a 100,000-line module
Mechanical refactoring                 | It safely does behavior-preserving changes like extract method and rename

Characterization test generation in particular is the agent's killer use case. Where a human gets bored and stops after writing a handful of cases, the agent quickly fills in dozens of input cases. The safety net gets thicker.

What agents are dangerous at

The problem is that the agent speaks of what it does not understand as if it understands it. The "weird if statement" of legacy code is usually important edge-case handling. The agent judges it as "code that looks unnecessary" and confidently deletes it. Why that if statement is there is written nowhere in the code — it is only in an outage retro from five years ago.

AI agents and legacy code — safety rules

Safety net first:
  - "Before changing the code, first pin the current behavior with characterization tests"
  - Have a human review the agent's characterization tests — do they really catch the current behavior?
  - If there is a golden master, always diff before and after the agent's change

Control the size and kind of the change:
  - State explicitly: "Do not change behavior. Behavior-preserving refactoring only."
  - One thing at a time — "Do not mix refactoring and feature addition in the same change"
  - Before deleting "code that looks weird," make it explain "why it is there" first

Force understanding:
  - When the agent says "this code is unnecessary" — check git blame and the related issue
  - Do large structural changes with the strangler fig — do not make the agent do a big-bang rewrite
  - The agent's "this is how it works" explanation is a hypothesis until verified

The key is that the procedure from Chapter 1 applies the same to humans and to agents. Safety net first (characterization tests), then small behavior-preserving changes, verify at every step. The agent makes the tedious steps of this procedure — especially test generation and call-graph tracing — dramatically faster. But the final judgment of "is this change safe" is still a human's. The agent's confidence is not evidence of correctness.

The agent does the work of throwing up a safety net over legacy code 10 times faster. But it also does the work of changing legacy code without a safety net 10 times faster. Which one you set it to is up to you.


Epilogue — Checklist and Anti-Patterns

The core of the craft of handling legacy code is one sentence. Pin the current behavior before you change it. Throw up a safety net with characterization tests, insert tests with seams, isolate risk with sprout and wrap, and replace the system gradually with the strangler fig. Not beating the fear with courage, but turning it into procedure.

Legacy code change checklist

  1. Did you say in one sentence what you are changing? — Is the scope clear, or vague like "clean up this module"?
  2. Did you pin the current behavior? — Does the code you are touching have characterization tests (or a golden master)?
  3. Did you pin that behavior even if it is a bug? — Right now the goal is understanding. The bug fix comes later, deliberately.
  4. Did you find a seam? — Which dependency, with which kind of seam, will you fake?
  5. Is the change for testing small and mechanical? — Extract method, add parameter — at a level the IDE can do?
  6. Did you start new code in a clean place? — With a sprout method/class, outside the legacy swamp?
  7. When adding behavior to existing behavior, did you wrap? — Did you not touch the original body by one character?
  8. Did you separate the refactor commit from the behavior-change commit? — Can a reviewer see the two separately?
  9. If it is a system replacement, is it a strangler? — Did you stand an intercepting layer, is rollback one line?
  10. Did you understand it enough before deciding to rewrite? — Are you not mistaking ugliness for worthlessness?
  11. Is the safety net green at every step? — Did you let go with the other hand only after one was secured?
  12. If you set an agent to it — Safety net first, behavior-preserving only, make it explain "why it is there" first.

Anti-patterns

Anti-pattern                                          | Why it is bad                                    | Instead
"Just fix it" without tests                           | You cannot know what broke                       | Safety net first, with characterization tests
Find a bug and immediately pin it "fixed"             | You changed it without knowing if it was intended | Pin it first, the bug fix is a separate deliberate decision
Cram new logic into the middle of a 200-line function | The function gets even more untestable           | Sprout method — new code in a clean place
Refactor the whole module while fixing a bug          | Unreviewable, cause untraceable                  | Boy Scout Rule — "just a little"
Big-bang rewrite                                      | Domain knowledge lost, rollback impossible, value 0 | Strangler fig — gradual replacement
"It's ugly, let's write it fresh"                     | Confusing ugliness with worthlessness            | Understand first — usually a refactor is enough
Edit the old method body directly to add behavior     | Risk the old behavior breaks                     | Wrap method — preserve the body
Read 100,000 lines from start to finish               | Wasted time, lost on side roads                  | From the entry point, only the path you need
Let go with both hands at once                        | If it breaks you do not know what caused it      | One hand secured, then the other — verify at every step
Trust the agent's "looks unnecessary" as is           | You delete edge-case handling                    | Check git blame and the issue, explain the reason first

Next post teaser

The next post is "Using Test Doubles Right — Mocks, Stubs, Fakes, and the Trap of Over-Mocking." In this post we said many times that a seam "swaps a dependency for a fake," but that fake has kinds too, and used badly the test sticks to the implementation and actually blocks refactoring. The difference between a stub, a mock, and a fake, what to mock and what not to mock, the signs that mocking has gone too far, and a practical take on the "London school vs classicist" debate. Now that you can throw up a safety net, next is how to see whether that safety net is really safe.