Kent Beck
2005-01-19 08:28:44 UTC
Write a failing automated test before changing any code. Test-first
programming addresses many problems at once:
* Scope creep: It's easy to get carried away programming and put in code
"just in case." By stating explicitly and objectively what the program is
supposed to do, you give yourself a focus for your coding. If you really
want to put that other code in, write another test after you've made this
one work.
* Coupling and cohesion: If it's hard to write a test, it's a signal that
you have a design problem, not a testing problem. Loosely coupled, highly
cohesive code is easy to test.
* Trust: It's hard to trust the author of code that doesn't work. By
writing clean code that works and demonstrating your intentions with
automated tests, you give your teammates a reason to trust you.
* Rhythm: It's easy to get lost for hours when you are coding. When
programming test-first, it's clearer what to do next: either write another
test or make the broken test work. Soon this develops into a natural and
efficient rhythm: test, code, refactor; test, code, refactor.
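The rhythm above can be sketched in a few lines. This is a minimal
illustration, not from the original text: the Stack class and its test are
invented names, and any small object would serve just as well. The test was
(conceptually) written first and failed; the class below is the minimal
code that makes it pass; refactoring then happens with the test as a safety
net.

```python
import unittest


class Stack:
    """Minimal code written only to make the failing test pass."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def top(self):
        return self._items[-1]


class StackTest(unittest.TestCase):
    # Step 1: write this test before Stack exists; watch it fail.
    # Step 2: write just enough of Stack to make it pass.
    # Step 3: refactor, rerun, then write the next test.
    def test_push_then_top(self):
        stack = Stack()
        stack.push(42)
        self.assertEqual(stack.top(), 42)
```

Run with `python -m unittest` to see the red/green cycle for yourself.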
The XP community hasn't done much exploration of alternatives to tests for
verifying the behavior of the system. Tools like static analysis and model
checking could be used test-first style. You start with a "test" that says,
for example, that there are no deadlocks in the system. After every change,
you verify again that there are no deadlocks. The static analysis tools I've
seen aren't intended to be used this way. They run too slowly to be part of
the minute-by-minute cycle of programming. However, this seems to be merely
a matter of focus, not a fundamental limitation.
Another refinement of test-first programming is continuous testing, first
reported by David Saff and Michael Ernst in "An Experimental Evaluation of
Continuous Testing During Development," and also explored in Erich Gamma's
and my book Contributing to Eclipse. In continuous testing the tests are run
on every program change, much as an incremental compiler is run on every
change to the source code. Test failures are reported in the same format as
compiler errors. Continuous testing reduces the time to fix errors by
reducing the time to discover them. The tests have to run quickly, however.
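A crude way to get this effect without IDE support is to watch the source
tree and rerun the suite on every change. The sketch below is an assumption
of mine, not how Saff and Ernst's tool works (theirs hooks into the
development environment); the polling interval, file pattern, and test
command are all illustrative choices.

```python
import os
import subprocess
import time


def snapshot(root="."):
    """Map each .py file under root to its last-modified time."""
    times = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                times[path] = os.path.getmtime(path)
    return times


def watch(root=".", interval=1.0):
    """Rerun the tests whenever any source file changes."""
    last = snapshot(root)
    while True:
        time.sleep(interval)
        current = snapshot(root)
        if current != last:
            # Report failures right away, like compiler errors.
            subprocess.run(["python", "-m", "unittest", "discover"])
            last = current
```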
The tests you write while coding test-first have the limitation that they
take a microview of the program: do these two objects work well together? As
your experience grows, you'll be able to squeeze more and more reassurance
into these tests. Because of their limited scope, these tests tend to run
very fast. You can run thousands of them as part of the Ten-Minute Build.
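A microview test in this sense checks one small collaboration and nothing
else, which is why thousands of them stay fast. The two classes below are
hypothetical, made up purely to show the shape of such a test: it exercises
how the objects work together, not the whole system.

```python
import unittest


class Tokenizer:
    """Splits text into words; one half of the collaboration."""

    def tokens(self, text):
        return text.split()


class WordCounter:
    """Counts words by delegating to a Tokenizer; the other half."""

    def __init__(self, tokenizer):
        self._tokenizer = tokenizer

    def count(self, text):
        return len(self._tokenizer.tokens(text))


class WordCounterTest(unittest.TestCase):
    # Microview: do these two objects work well together?
    # No files, no network, no setup beyond the objects themselves.
    def test_counts_words_via_tokenizer(self):
        counter = WordCounter(Tokenizer())
        self.assertEqual(counter.count("clean code that works"), 4)
```

Because a test like this touches nothing outside the two objects, it runs
in microseconds and fits comfortably inside a ten-minute build.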