In his talk Nat expressed surprise that many people using mock objects for testing start by creating unit tests for, and implementations of, domain objects, whereas the GOOS way is to start by creating end-to-end tests that exercise execution paths through all the layers of the system (aka "system tests"). If the entry point to the system is in layer 1, the first test will mock the objects in layer 2; those mocks are then replaced by real layer-2 implementations backed by mocks in layer 3, and so on.
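The layered progression can be sketched in code. This is a minimal illustration, not anything from the talk: the three-layer system (`OrderEndpoint`, `OrderService`, `OrderRepository`) and all its method names are hypothetical, and Python's `unittest.mock` stands in for whatever mocking library you actually use.

```python
# Hypothetical three-layer system: OrderEndpoint (layer 1) calls
# OrderService (layer 2), which calls OrderRepository (layer 3).
from unittest.mock import Mock

class OrderEndpoint:
    """Layer 1: translates an external event into a service call."""
    def __init__(self, service):
        self.service = service

    def handle(self, request):
        return self.service.place_order(request["item"], request["qty"])

class OrderService:
    """Layer 2: domain logic, collaborating with layer 3."""
    def __init__(self, repository):
        self.repository = repository

    def place_order(self, item, qty):
        order_id = self.repository.save(item, qty)
        return {"status": "accepted", "id": order_id}

# Step 1: drive layer 1 end-to-end against a mocked layer 2.
service = Mock()
service.place_order.return_value = {"status": "accepted", "id": 1}
endpoint = OrderEndpoint(service)
assert endpoint.handle({"item": "book", "qty": 2})["status"] == "accepted"
service.place_order.assert_called_once_with("book", 2)

# Step 2: swap in the real layer 2, now mocking layer 3 instead.
repository = Mock()
repository.save.return_value = 42
endpoint = OrderEndpoint(OrderService(repository))
assert endpoint.handle({"item": "book", "qty": 2}) == {"status": "accepted", "id": 42}
repository.save.assert_called_once_with("book", 2)
```

Each step keeps the same external entry point (`handle`) while the boundary between real and mocked code moves one layer deeper.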
I share Nat's surprise that this isn't standard practice, but what particularly interests me is the justification for it. I discussed this a bit with Nat, and then some more with Rachel Davies and Willem van den Ende, and here are the thoughts we had.
It seems to me that for Nat and Steve the primary justification is one of process. First, I make the assumption that there is a direct correspondence between a set of end-to-end tests and a story: the implementation of a story is driven by creating a set of end-to-end tests whose starting points are events detected by the system-under-test at its boundary. The resulting top-down decomposition and refinement ensure there is a natural order of development that creates exactly the software required to pass the tests, and hence implement the story. No extraneous lines of code are created, and the successive refinement makes it clear what you need to do next.
By contrast, starting the implementation of a story at the domain layer requires assumptions about how the triggering events will translate into domain model invocations. Get those assumptions wrong and you find that the code you've written in the domain model isn't a good fit when, eventually, you come to hook it up to the system's external interfaces. And you may find you've written code you don't need at all.
This alone is probably all the justification you need for following the GOOS approach, but I think there are other considerations, the primary one being risk management.
My rule of thumb is that when creating a software system you should tackle the riskiest parts first, or at least as early as is compatible with the overriding need to demonstrate progress. In my experience the major risks to project success are not in the domain model; they are in how all the layers of the system fit together, and in the interactions with external agents, such as users and other systems. Therefore it makes sense to start with end-to-end tests because these tests expose exactly those issues. It's true that the design of the domain model may affect system-level characteristics, such as performance, but even there you are more likely to detect these effects through end-to-end tests than via domain model unit tests.
Given these two compelling justifications for starting with end-to-end tests, why is it that many people apparently don't start there? We came up with two possibilities, although there may be many others:
- Starting with the domain model can provide an illusion of rapid progress. You can show business features working while ignoring the realities of the larger system environment. Clearly, this approach does not normally address the biggest risks first. But it's an easy option, and an attractive one when you're under pressure.
- For some reason the system environment is not available to you; perhaps, for example, the team creating the infrastructure is late delivering. So rather than taking the correct – and brave – option of loudly declaring progress on your project to be blocked, you restrict yourself to creating those parts of the system that are within your control.