Discussion:
Practice: Ten-Minute Build
Kent Beck
2005-01-03 23:22:35 UTC
Permalink
Automatically build the whole system and run all of the tests in ten
minutes. A build that takes longer than ten minutes will be used much less
often, missing the opportunity for feedback. A shorter build doesn't give
you time to drink your coffee.
Physics has reassuringly concrete natural constants. At sea level on
earth, the force of gravity accelerates objects at 9.8 meters per second per
second. You can count on gravity. Software has few such certainties. The
ten-minute build is as close as we get in software engineering. I've
observed that several teams who started with an automated build-and-test
process never let it take longer than ten minutes. If it did, someone
optimized it, but only until it took ten minutes again.
The ten-minute build is an ideal. What do you do on your way to the ideal?
The statement of the practice gives three clues: *automatically* build the
*whole* system and run *all* of the tests in ten minutes. If your process
isn't automated, that's the first place to start. Then you may be able to
build only the part of the system you have changed. Finally, you may be able
to run only tests covering the part of the system at risk because of the
changes you made.
Any guess about what parts of the system *need* to be built and what parts
*need* to be tested introduces the risk of error. If you are wrong, you may
miss unpredictable errors with all their social and economic costs. However,
being able to test some of the system is much better than being able to test
none at all.
Automated builds are much more valuable than builds requiring manual
intervention. As the general stress level rises, manual builds tend to be
done less often and less well, resulting in more errors and more stress.
Practices should lower stress. An automated build becomes a stress reliever
at crunch time. "Did we make a mistake? Let's just build and see."
jxg3776
2005-01-04 15:18:44 UTC
Permalink
Post by Kent Beck
Automatically build the whole system and run all of the tests in ten
minutes.
Question: what would you recommend once you are at the stage of
automatically building the whole system and running all of the tests
but it takes much longer than ten minutes to do so?
Example 1: Rebuilding the Linux kernel from sources takes much longer
than ten minutes even without tests.
Example 2: I have another project with a very short build and unit
test time, but the customer tests take longer to run. And that is not
a particularly large project, but there are enough "things for the
customer to test"...
Laurent Bossavit
2005-01-04 17:13:21 UTC
Permalink
Post by jxg3776
Example 1: Rebuilding the Linux kernel from sources takes much longer
than ten minutes even without tests.
What's the "speed of light" for software builds ?

Blurbs for "high performance" compilers used to advertise speeds in
the range of hundreds of thousands of lines of code per minute.
What's a *very* large system - one with about a million lines of
code? Why should that take very much longer to build than ten minutes
then ?

Has anyone run the equivalent of a profiler on a full system's (long)
build procedure and measured where all the time went ? What are the
"low hanging fruit" optimizations ?

Cheers,

-[Laurent]-
The greatest obstacle to transforming the world is that we lack the
clarity and imagination to conceive that it could be different.
Roberto Mangabeira Unger
Luiz Esmiralha
2005-01-04 17:14:01 UTC
Permalink
On Tue, 04 Jan 2005 18:13:21 +0100, Laurent Bossavit
Post by Laurent Bossavit
Cheers,
-[Laurent]-
The greatest obstacle to transforming the world is that we lack the
clarity and imagination to conceive that it could be different.
Roberto Mangabeira Unger
By the way, great quote, Laurent. Mangabeira is one Brazilian to be proud of.

Luiz
Brad Appleton
2005-01-06 20:02:33 UTC
Permalink
Post by Laurent Bossavit
Has anyone run the equivalent of a profiler on a full system's (long)
build procedure and measured where all the time went ? What are the
"low hanging fruit" optimizations ?
* I/O overhead to access/open files (and search for them in the include-path or library-path or classpath)
* Target+Dependency calculations/derivations in the build script (Make/Ant file)
* All the other things that happen during a build besides the "raw" work of compiling and linking (running scripts, blocks of shell commands using pipes and shell logic, re-generating dependencies, or generating other information)

Those figures from compiler vendors boasting speeds of
hundreds of thousands of lines of code per minute don't
really account for any of the above. If I had a 100K-line
program all in one file, then my build time might be
comparable to my compile time (most folks don't do that).

Files have to get opened whenever the compiler needs to
compile that file and whenever the file being compiled
has the equivalent of a #include (or the equivalent in
languages like Ada or Java). John Lakos' book "Large Scale
C++ Design" has a section on how the use of redundant
#include guards to avoid even opening a file if it was
already #included (so it wouldn't even look at the #ifdef
MYHDR_H in the myhdr.h file) provided as much as a 25X
speedup in build-cycle time.

Once you start "caching" objects/sources to try to reuse
rather than rebuild them, you spend extra time either
checking the cache (especially if it's on a network) and/or
computing dependencies to see whether or not you even need
the file (though that can sometimes be pure processor time).

Build times are typically dominated by I/O overhead
(and here I mean just the build time, not the time
to do any testing after the built objects are created).
--
Brad Appleton <brad-***@public.gmane.org> www.bradapp.net
Software CM Patterns (www.scmpatterns.com)
Effective Teamwork, Practical Integration
"And miles to go before I sleep." -- Robert Frost
Keith Ray
2005-01-06 21:44:43 UTC
Permalink
The Metrowerks CodeWarrior compilers used to be among the fastest
C/C++ compilers on various platforms (and maybe still are).

It seems like the compiler/IDE caches the source and header files in
memory, and probably caches parse trees as well... if a header file
has #pragma once in it, then it knows not to open and read the file
multiple times during a compile -- redundant include #ifdefs are not
needed.

By the way, keeping sources cached in memory sure helps speed up
multi-file searches -- I wonder if they're indexing the sources as
well.

On the other hand, the compiler/IDE just "unexpectedly" crashed while
I was typing this. Bleh.
Post by Brad Appleton
Post by Laurent Bossavit
Has anyone run the equivalent of a profiler on a full system's (long)
build procedure and measured where all the time went ? What are the
"low hanging fruit" optimizations ?
* I/O overhead to access/open files (and search for them in the include-path or library-path or classpath)
* Target+Dependency calculations/derivations in the build script (Make/Ant file)
* All the other things that happen during a build besides the "raw" work of compiling and linking (running scripts, blocks of shell commands using pipes and shell logic, re-generating dependencies, or generating other information)
----
C. Keith Ray
<http://homepage.mac.com/keithray/blog/index.html>
<http://homepage.mac.com/keithray/xpminifaq.html>
<http://homepage.mac.com/keithray/resume2.html>
Brad Appleton
2005-01-07 07:54:42 UTC
Permalink
Post by Keith Ray
It seems like the compiler/IDE caches the source and header files in
memory, and probably caches parse trees as well... if a header file
has #pragma once in it, then it knows not to open and read the file
multiple times during a compile -- redundant include #ifdefs are not
needed.
Nice! Some early C++ compilers (before the language was
standardized) supported "precompiled" header files (I think
Borland C++ was one of the first to do this). Even
before that, many Ada compilers did the equivalent
using Ada's tremendously powerful "package"
capabilities (which, if I recall correctly, are much more
powerful than Java "packages" at defining and organizing
hierarchical language/compilation units [HLUs]). Many Java
implementations do something similar.

File and network I/O still seem to be the dominant factor
for build-times. There is a new product out called Electric
Cloud (see www.electric-cloud.com) that boasts "lightning
fast" software build infrastructure. The CEO of the company
is John K. Ousterhout (creator of Tcl and Tk).
Post by Keith Ray
By the way, keeping sources cached in memory sure helps speed up
multi-file searches -- I wonder if they're indexing the sources as
well.
Depends again on the implementation of course, but it's not
a bad idea. More folks are looking into the compiler as a
tool for creating not merely binary/objects, but an actual
database of information about the codebase. This would
be used not merely for lookup purposes during builds, but
also to query information about structure or semantics of
the codebase (e.g., "give me all occurrences of variable
"abc" at lexical, non-global scope", etc.) as well as
for integration with modeling tools, versioning tools,
debug/trace tools, etc.
--
Brad Appleton <brad-***@public.gmane.org> www.bradapp.net
Software CM Patterns (www.scmpatterns.com)
Effective Teamwork, Practical Integration
"And miles to go before I sleep." -- Robert Frost
Laurent Bossavit
2005-01-07 09:02:49 UTC
Permalink
Brad,
Those figures from compiler vendors boasting speeds of hundreds of
thousands of lines of code per minute don't really account for any
of the above.
Indeed. The Kylix folks get a figure of 4 million lines per minute...
by extrapolating from a figure of under a second to compile a single
60-thousand line file.

If I/O is dominant, should we expect build times to be strictly
proportional to the project's "disk size", given roughly constant
average file sizes ?

Cheers,

-[Laurent]-
Reading this tagline will void its warranty.
Keith Ray
2005-01-07 16:32:27 UTC
Permalink
In C++, header inclusion and parsing are dominant. In the absence of
pre-compiled headers, parsing in-line code in C++ header files over and
over is a substantial loss of speed... I've improved speed 20% or 30%
in C++ projects by removing in-line code, using the pImpl idiom,
forward-declaring classes to avoid #includes, and otherwise decreasing
the "physical coupling" between various C++ files.


On Fri, 07 Jan 2005 10:02:49 +0100, Laurent Bossavit
Post by Laurent Bossavit
If I/O is dominant, should we expect build times to be strictly
proportional to the project's "disk size", given roughly constant
average file sizes ?
Cheers,
-[Laurent]-
Reading this tagline will void its warranty.
--
----
C. Keith Ray
<http://homepage.mac.com/keithray/blog/index.html>
<http://homepage.mac.com/keithray/xpminifaq.html>
<http://homepage.mac.com/keithray/resume2.html>
Jason Yip
2005-01-04 21:27:44 UTC
Permalink
Post by jxg3776
Post by Kent Beck
Automatically build the whole system and run all of the tests in ten
minutes.
Question: what would you recommend once you are at the stage of
automatically building the whole system and running all of the tests
but it takes much longer than ten minutes to do so?
Example 1: Rebuilding the Linux kernel from sources takes much longer
than ten minutes even without tests.
Example 2: I have another project with a very short build and unit
test time, but the customer tests take longer to run. And that is not
a particularly large project, but there are enough "things for the
customer to test"...
First step is probably just profiling the tests and looking for
opportunities to optimise. A common issue I've seen is related to
test focus and isolation. Not every test needs to be a full
integration test.

If you can do it, set up a build grid.
http://confluence.public.thoughtworks.org/display/CC/CruiseControlAtSas

If you still can't get the build under 10 minutes, the pattern seems
to be build staging. Run a subset of the tests before committing and
run the full set after committing. Note that the subset should not be
just unit tests but unit tests plus a selection of the customer tests.

My own preference is to keep any pre-commit build to under 5 minutes,
though I'm already getting antsy at 2 minutes. Even a post-commit
build, I'd want to keep under 10 minutes.
Jason Yip
2005-01-04 21:32:34 UTC
Permalink
Hmmm... actually the build staging bit is more about Continuous
Integration than Ten-Minute Build.

The optimisation and build grid comments still hold though.
Post by Jason Yip
Post by jxg3776
Post by Kent Beck
Automatically build the whole system and run all of the tests in ten
minutes.
Question: what would you recommend once you are at the stage of
automatically building the whole system and running all of the tests
but it takes much longer than ten minutes to do so?
Example 1: Rebuilding the Linux kernel from sources takes much longer
than ten minutes even without tests.
Example 2: I have another project with a very short build and unit
test time, but the customer tests take longer to run. And that is not
a particularly large project, but there are enough "things for the
customer to test"...
First step is probably just profiling the tests and looking for
opportunities to optimise. A common issue I've seen is related to
test focus and isolation. Not every test needs to be a full
integration test.
If you can do it, set up a build grid.
http://confluence.public.thoughtworks.org/display/CC/CruiseControlAtSas
If you still can't get the build under 10 minutes, the pattern seems
to be build staging. Run a subset of the tests before committing and
run the full set after committing. Note that the subset should not be
just unit tests but unit tests plus a selection of the customer tests.
My own preference is to keep any pre-commit build to under 5 minutes,
though I'm already getting antsy at 2 minutes. Even a post-commit
build, I'd want to keep under 10 minutes.
J. B. Rainsberger
2005-03-14 04:35:13 UTC
Permalink
Post by Jason Yip
Post by jxg3776
Post by Kent Beck
Automatically build the whole system and run all of the tests in ten
minutes.
Question: what would you recommend once you are at the stage of
automatically building the whole system and running all of the tests
but it takes much longer than ten minutes to do so?
Example 1: Rebuilding the Linux kernel from sources takes much longer
than ten minutes even without tests.
Example 2: I have another project with a very short build and unit
test time, but the customer tests take longer to run. And that is not
a particularly large project, but there are enough "things for the
customer to test"...
First step is probably just profiling the tests and looking for
opportunities to optimise. A common issue I've seen is related to
test focus and isolation. Not every test needs to be a full
integration test.
If you can do it, set up a build grid.
http://confluence.public.thoughtworks.org/display/CC/CruiseControlAtSas
If you still can't get the build under 10 minutes, the pattern seems
to be build staging. Run a subset of the tests before committing and
run the full set after committing. Note that the subset should not be
just unit tests but unit tests plus a selection of the customer tests.
My own preference is to keep any pre-commit build to under 5 minutes,
though I'm already getting antsy at 2 minutes. Even a post-commit
build, I'd want to keep under 10 minutes.
I like to post the top 20 slow tests every week as a Big Visible Chart.
We then stare at the list and look for patterns, which helps us identify
hotspots that we might need to investigate.

I also have a specific goal: 250 tests per second.

These two things together tend to keep build times acceptably low.
--
J. B. (Joe) Rainsberger
Diaspar Software Services
http://www.diasparsoftware.com
Author, JUnit Recipes: Practical Methods for Programmer Testing
Kent Beck
2005-01-05 22:05:16 UTC
Permalink
That's what I was trying to get at when I wrote:
"The ten-minute build is an ideal. What do you do on your way to the
ideal? The statement of the practice gives three clues: *automatically*
build the *whole* system and run *all* of the tests in ten minutes. If your
process isn't automated, that's the first place to start. Then you may be
able to build only the part of the system you have changed. Finally, you may
be able to run only tests covering the part of the system at risk because of
the changes you made."

So, my advice is don't build the whole system from scratch or don't run all
the tests. However, this is an approximation of the ideal situation. I also
advise working continuously to bring more of the tests and more of the
complete build into the ten-minute build. Does that answer your question?

Kent Beck
Three Rivers Institute

-----Original Message-----
From: jxg3776 [mailto:jgraves-***@public.gmane.org]
Sent: Tuesday, January 04, 2005 7:19 AM
To: xpbookdiscussiongroup-***@public.gmane.org
Subject: [xpe2e] Re: Practice: Ten-Minute Build
Post by Kent Beck
Automatically build the whole system and run all of the tests in ten
minutes.
Question: what would you recommend once you are at the stage of
automatically building the whole system and running all of the tests
but it takes much longer than ten minutes to do so?
Example 1: Rebuilding the Linux kernel from sources takes much longer
than ten minutes even without tests.
Example 2: I have another project with a very short build and unit
test time, but the customer tests take longer to run. And that is not
a particularly large project, but there are enough "things for the
customer to test"...






Brad Appleton
2005-01-07 08:26:30 UTC
Permalink
Post by Kent Beck
"The ten-minute build is an ideal. What do you do on your way to the
ideal? The statement of the practice gives three clues: *automatically*
build the *whole* system and run *all* of the tests in ten minutes. If your
process isn't automated, that's the first place to start. Then you may be
able to build only the part of the system you have changed. Finally, you may
be able to run only tests covering the part of the system at risk because of
the changes you made."
While it may be obvious to some, I think it's important to mention ... "you may be able to build only the part of the system you have changed" could be interpreted in at least two ways:
a) INCREMENTAL SYSTEM BUILD (rather than full build) of the *whole* system

b) COMPONENT BUILD - an incremental or full build of a subset of the whole system (implies only component-wide testing as well, rather than system-wide)

When faced with these two choices, I would say ALWAYS prefer building+testing the WHOLE SYSTEM whenever feasible. In particular, if it's not feasible to do a full build of the whole system in 10 minutes, then it's better to do an incremental system build than a full component build. Working with the *whole* instead of just a sub-part is more important than working with a fully (rather than incrementally) built component.

Adding more levels/layers of integration+build carries a greater cost/risk to feedback and "wholeness" than using incremental reproductions of the *whole*.

That said, with systems that are millions of lines of code or larger, it is common to see a "team of teams" approach with a team per component (or at least several independently buildable+testable components, even if it's all done by one team). So doing an "incremental" rebuild of the whole system may still be infeasible to do in <10 minutes in such cases.

In such situations, one can make another compromise while still relinking and testing the *whole* system. Even if you can't incrementally build the "whole" system, you can relink+retest with a full/incremental build of the changed component PLUS the "last good build" of all the other components. This avoids spending any time on build/compile (and dependency computation) for all but the changed component, and reuses previously built+linked component versions (even though they may not quite correspond to the latest source code of the other components, they are at least a close approximation of the "latest").

One very common way of doing this is with one or more staging areas and a corresponding distribution strategy for when & how often to "push" built components to the staging area. See the article "Continuous Staging: Scaling Continuous Integration to Multiple Component Teams" at:
<http://www.cmcrossroads.com/newsletter/articles/agilemar04.pdf>
Post by Kent Beck
So, my advice is don't build the whole system from scratch or don't run all
the tests. However, this is an approximation of the ideal situation. I also
advise working continuously to bring more of the tests and more of the
complete build into the ten-minute build. Does that answer your question?
--
Brad Appleton <brad-***@public.gmane.org> www.bradapp.net
Software CM Patterns (www.scmpatterns.com)
Effective Teamwork, Practical Integration
"And miles to go before I sleep." -- Robert Frost
Jim Shore
2005-01-06 00:06:22 UTC
Permalink
I often see teams moved to create two suites: one long-running (over
night and on the integration server) and another that is often called
something like a "commit suite."
Not that there is a conflict, but I would like to know how this fits
into the ten-minute build idea. I raised the ten minute build ideal to a
group yesterday and their first question was "does that mean the story
tests as well?"
I would say, yes, it includes the story tests. The trick is that story
tests don't need to be end-to-end integration tests. Using end-to-end
tests for story tests leads to slow test times. They're also a lot
harder to write, read, and modify.

Instead, I use more focused tests that talk about the essence of the
story. This takes less time and is more customer-friendly, giving me
the opportunity to involve the customer more and get more tests. Since
I use Fit, I look for ways to use the rule-oriented ColumnFixture more
and the step-by-step ActionFixture less.

I see end-to-end tests as developer tests, not customer tests. They
check for cross-component interactions. Once I have a large suite of
unit tests and story tests, a limited number of focused end-to-end tests
suffices to check whether things are hooked together properly. I use
exploratory testing as a further sanity check.

Jim
--
Jim Shore
Titanium I.T. LLC - Making IT Profitable
I coach XP and Lean Development. Now available in Portland, Ore.

phone: 503-267-5490
email: jshore-***@public.gmane.org
david.hussman
2005-01-05 22:54:09 UTC
Permalink
I often see teams moved to create two suites: one long-running (over
night and on the integration server) and another that is often called
something like a "commit suite."

Not that there is a conflict, but I would like to know how this fits
into the ten-minute build idea. I raised the ten minute build ideal to a
group yesterday and their first question was "does that mean the story
tests as well?"

-----Original Message-----
From: Kent Beck [mailto:kentb-***@public.gmane.org]
Sent: Wednesday, January 05, 2005 4:05 PM
To: xpbookdiscussiongroup-***@public.gmane.org
Subject: RE: [xpe2e] Re: Practice: Ten-Minute Build


That's what I was trying to get at when I wrote:
"The ten-minute build is an ideal. What do you do on your way to the
ideal? The statement of the practice gives three clues: *automatically*
build the *whole* system and run *all* of the tests in ten minutes. If
your
process isn't automated, that's the first place to start. Then you may be
able to build only the part of the system you have changed. Finally, you
may
be able to run only tests covering the part of the system at risk
because of
the changes you made."

So, my advice is don't build the whole system from scratch or don't run
all
the tests. However, this is an approximation of the ideal situation. I
also
advise working continuously to bring more of the tests and more of the
complete build into the ten-minute build. Does that answer your
question?

Kent Beck
Three Rivers Institute

-----Original Message-----
From: jxg3776 [mailto:jgraves-***@public.gmane.org]
Sent: Tuesday, January 04, 2005 7:19 AM
To: xpbookdiscussiongroup-***@public.gmane.org
Subject: [xpe2e] Re: Practice: Ten-Minute Build
Post by Kent Beck
Automatically build the whole system and run all of the tests in ten
minutes.
Question: what would you recommend once you are at the stage of
automatically building the whole system and running all of the tests
but it takes much longer than ten minutes to do so?
Example 1: Rebuilding the Linux kernel from sources takes much longer
than ten minutes even without tests.
Example 2: I have another project with a very short build and unit
test time, but the customer tests take longer to run. And that is not
a particularly large project, but there are enough "things for the
customer to test"...

















Amir Kolsky
2005-01-11 08:44:59 UTC
Permalink
There are two things that we tried.

1. Profile the tests with respect to compilation units (or some other discernible code
unit). This way, when a specific unit is changed, you only run the tests that
are related to it. This is relatively easy to do by using a profiler: run a
test, see the affected code, rinse, repeat. This is a compelling argument for
making very small tests.

2. Distribute the tests - change your testing framework so that it will
dispatch and retrieve test jobs rather than execute them directly. This can
apply to both the unit and story tests...

Amir Kolsky
XP& Software



_____

From: david.hussman [mailto:david.hussman-XW8q3tYMaZQKciORVfTcnwC/***@public.gmane.org]
Sent: Thursday, January 06, 2005 12:54 AM
To: xpbookdiscussiongroup-***@public.gmane.org
Subject: RE: [xpe2e] Re: Practice: Ten-Minute Build


I often see teams moved to create two suites: one long-running (over
night and on the integration server) and another that is often called
something like a "commit suite."

Not that there is a conflict, but I would like to know how this fits
into the ten-minute build idea. I raised the ten minute build ideal to a
group yesterday and their first question was "does that mean the story
tests as well?"

-----Original Message-----
From: Kent Beck [mailto:kentb-***@public.gmane.org]
Sent: Wednesday, January 05, 2005 4:05 PM
To: xpbookdiscussiongroup-***@public.gmane.org
Subject: RE: [xpe2e] Re: Practice: Ten-Minute Build


That's what I was trying to get at when I wrote:
"The ten-minute build is an ideal. What do you do on your way to the
ideal? The statement of the practice gives three clues: *automatically*
build the *whole* system and run *all* of the tests in ten minutes. If
your
process isn't automated, that's the first place to start. Then you may be
able to build only the part of the system you have changed. Finally, you
may
be able to run only tests covering the part of the system at risk
because of
the changes you made."

So, my advice is don't build the whole system from scratch or don't run
all
the tests. However, this is an approximation of the ideal situation. I
also
advise working continuously to bring more of the tests and more of the
complete build into the ten-minute build. Does that answer your
question?

Kent Beck
Three Rivers Institute

-----Original Message-----
From: jxg3776 [mailto:jgraves-***@public.gmane.org]
Sent: Tuesday, January 04, 2005 7:19 AM
To: xpbookdiscussiongroup-***@public.gmane.org
Subject: [xpe2e] Re: Practice: Ten-Minute Build
Post by Kent Beck
Automatically build the whole system and run all of the tests in ten
minutes.
Question: what would you recommend once you are at the stage of
automatically building the whole system and running all of the tests
but it takes much longer than ten minutes to do so?
Example 1: Rebuilding the Linux kernel from sources takes much longer
than ten minutes even without tests.
Example 2: I have another project with a very short build and unit
test time, but the customer tests take longer to run. And that is not
a particularly large project, but there are enough "things for the
customer to test"...


























J. B. Rainsberger
2005-03-14 04:38:18 UTC
Permalink
I often see teams moved to create two suites: one long-running (over
night and on the integration server) and another that is often called
something like a "commit suite."
Not that there is a conflict, but I would like to know how this fits
into the ten-minute build idea. I raised the ten minute build ideal to a
group yesterday and their first question was "does that mean the story
tests as well?"
If the story tests take overnight to run, then I would ask some questions:

* do the story tests duplicate routine error scenarios?
* do the story tests verify business logic using the UI as a tool?
* if I didn't run /this/ story test here, would your customers have so
much less confidence in the presence of the feature it tests?

A suite that takes that long to run might be trying to do too much.
--
J. B. (Joe) Rainsberger
Diaspar Software Services
http://www.diasparsoftware.com
Author, JUnit Recipes: Practical Methods for Programmer Testing
jxg3776
2005-01-06 16:19:34 UTC
Permalink
Post by Kent Beck
So, my advice is don't build the whole system from scratch or don't
run all
Post by Kent Beck
the tests. However, this is an approximation of the ideal situation.
I also
Post by Kent Beck
advise working continuously to bring more of the tests and more of the
complete build into the ten-minute build. Does that answer your
question?

That is what I expected: I raised the question because the _priority_
of the three steps seemed unclear to me in the way the practice was
stated. How do you justify the effort required to approach the ideal,
and is there a recommended order in which to take the steps?

In the metaphor of cycles, anything I have that is outside the time
frame of ten minutes goes into a longer cycle, but I always have a
short cycle of ten minutes that builds and tests everything that can
be done inside of ten minutes (actually six minutes on the slow
project). The slow tests run in another cycle, giving more detailed but
less frequent feedback (unfortunately, five hours). I am comfortable
with the level of automation, coverage, and feedback we get from this
two-cycle approach, but I could see it running afoul of the practice
as stated, with one potential recommendation being to jettison some of
the complexity of the customer tests (as a means of getting them to
run in a faster time). That does not seem right to me.
Post by Kent Beck
-----Original Message-----
Sent: Tuesday, January 04, 2005 7:19 AM
Subject: [xpe2e] Re: Practice: Ten-Minute Build
Post by Kent Beck
Automatically build the whole system and run all of the tests in ten
minutes.
Question: what would you recommend once you are at the stage of
automatically building the whole system and running all of the tests
but it takes much longer than ten minutes to do so?
Example 1: Rebuilding the Linux kernel from sources takes much longer
than ten minutes even without tests.
Example 2: I have another project with a very short build and unit
test time, but the customer tests take longer to run. And that is not
a particularly large project, but there are enough "things for the
customer to test"...
Kent Beck
2005-01-10 11:06:14 UTC
Permalink
Thank you for the clarifying response. I am not an SCM expert, so my short
discussion wasn't as clear as it could have been. The goal is to get
feedback and catch errors as quickly as possible. I've saved Brad's
recommendations for doing whole-system builds for future reference.

On a related note, I have a different view of practices than the one I read
in your message. When you speak of "running afoul" of a practice, it sounds
to me like you see practices as commandments or laws that can be violated:
if you follow them you are good; if you don't, you are bad.

Setting up absolute judgements of behavior is not my intention in writing
down practices. My goal is to get people thinking about their own practices
and to give them an opportunity to be accountable. Ten-Minute Build, to my
mind, is not something you can "run afoul of". It is the spark of an idea.
"Hey, we're getting too many interruptions from post-integration defects.
Let's think about Ten-Minute Build for a second. Is the build not running
often enough because it takes half an hour? Are we not including the right
tests? Are we not rebuilding enough of the system?"

I used to suggest separating the tests into two tiers. I assumed that
system-level tests would take a long time to run. Then I visited LifeWare.
They run all their tests on every integration. "Why wouldn't you want to
have that feedback?" Well, I would. I just assumed it was impossible. That's
part of making the practice accountable. Do you have two tiers of tests,
some run more often than others? Why? If you can afford to run the second
tier less often, are they really that valuable or are they just a security
blanket? If they are valuable, keep them.

I heard a story recently about a team that was doing enterprise software.
They used to run a 90-day burn-in test. When they went to XP, they stopped
doing so, assuming that they would catch all the errors with other forms of
testing. The first release went out. A couple of months passed. The product
started falling over in the field. In retrospect, this team should have
retained their burn-in tests. I can imagine them "following" Ten-Minute
Build and eliminating them, which is the danger of treating the practices as
rules. If the team had been focused on retaining what was good about their
previous process but getting as much rapid feedback as possible too, they
would have done both forms of testing.

Once again, thanks for your comments.

Kent Beck
Three Rivers Institute

-----Original Message-----
From: jxg3776 [mailto:jgraves-***@public.gmane.org]
Sent: Thursday, January 06, 2005 8:20 AM
To: xpbookdiscussiongroup-***@public.gmane.org
Subject: [xpe2e] Re: Practice: Ten-Minute Build

...I am comfortable
with the level of automation, coverage, and feedback we get from this
two-cycle approach, but I could see it running afoul of the practice
as stated, with one potential recommendation being to jettison some of
the complexity of the customer tests (as a means of getting them to
run faster). That does not seem right to me.
Appleton Brad-BRADAPP1
2005-01-07 22:29:57 UTC
Permalink
Post by Laurent Bossavit
If I/O is dominant, should we expect build times to be strictly
proportional to the project's "disk size", given
roughly constant average file sizes ?
I think it's not so much the "disk size" of the project that dominates as the number of disk accesses, corresponding to the number of file open calls. If you look at a typical C/C++ project with lots of #includes, you could run the compiler to produce just the preprocessor output. Then, rather than counting the lines of output, count the linemarker directives (the lines of the form # <line> "<file>", representing the number of files that had to be opened/accessed and context-switched) just to compile a single .c file.
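That count can be approximated by scanning the linemarkers in the preprocessor output (a minimal Python sketch; the sample is a made-up slice of what gcc -E might emit, and real output would be piped in instead):

```python
import re

def count_opened_files(preprocessed_source):
    """Count distinct files named in preprocessor linemarkers, which
    gcc -E emits as lines of the form:  # <line> "<file>" [flags].
    Pseudo-files like "<built-in>" are excluded."""
    files = re.findall(r'^#\s+\d+\s+"([^"<]+)"', preprocessed_source, re.MULTILINE)
    return len(set(files))

# A tiny made-up slice of `gcc -E foo.c` output:
sample = '''# 1 "foo.c"
# 1 "/usr/include/stdio.h" 1 3 4
# 29 "/usr/include/stdio.h" 3 4
# 2 "foo.c" 2
int main(void) { return 0; }
'''
print(count_opened_files(sample))  # → 2
```

On a real project you would feed it the output of `gcc -E` on a single .c file and compare the file count, not the line count, across translation units.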

Lakos' book (Large-Scale C++ Software Design) is entirely focused on the "physical" (build+link+compile time) dependencies and "physical coupling" across files and folders/directories in a project. It is very illuminating and insightful in its discussion of how things that might not be noticed (much less be issues) for small projects can become VERY CRITICAL factors for very large projects.

What if we could rid ourselves of the need to deal with the program source-code at the filesystem level? What if we used a GUI+IDE (such as ye old Visual Works, or some MDD-enhanced variant of Eclipse) to navigate the codebase via logical/conceptual entities rather than via filesystem entities?

Then the IDE could be responsible for visually presenting the code in the most humanly useful/navigable/maintainable fashion, while physically representing the code (and model and ...) on disk in the most efficient manner possible (with some configurable profile options using reasonable defaults). It might take us more time to "save/generate" an "image" of the code to disk, but then once we did, it could conceivably compile+build 10X or even 100X faster.

Maybe then it would be the "output" (the number of loadable runtime "units") that dominated build times more than the "input".
Appleton Brad-BRADAPP1
2005-01-10 21:24:57 UTC
Permalink
Post by Kent Beck
I used to suggest separating the tests into two tiers. I
assumed that system-level tests would take a long time to
run. Then I visited LifeWare. They run all their tests on
every integration. "Why wouldn't you want to have that
feedback?" Well, I would. I just assumed it was
impossible. That's part of making the practice
accountable. Do you have two tiers of tests, some run
more often than others? Why? If you can afford to run the
second tier less often, are they really that valuable or
are they just a security blanket? If they are valuable, keep them.
I wonder if this is another case of YAGNI and Refactoring? Typically, the "simplest thing that could possibly work" would be a full build of the whole system running full tests. Splitting tests into two tiers sounds like a "refactoring" of sorts: I am redistributing the coverage and execution of certain tests to elsewhere in the overall feedback cycle.
* If I forgo certain tests or certain kinds of tests, then in terms of "coverage", it's no longer a "behavior preserving" redistribution of the building+testing comprehensiveness.
* If I split into two tiers, I still want to execute ALL the tests, but some of them perhaps less frequently, for various reasons that make it infeasible to do it more often.
* If I do incremental builds rather than full builds, then I still want to do full builds to make sure I really and truly have the latest + greatest state of EVERYTHING (code, data, etc.), but I may do the full builds less frequently (again, for various reasons that make it infeasible to do it more often).

As always, the desire is to get feedback on the WHOLE system as fast and as frequently as possible - so I can decrease the "learning latency period" between making a change/decision and discovering its FULL consequences, and learn to respond (correct/adapt) as early as possible.
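The incremental-versus-full redistribution can be sketched as a tiny scheduling rule (Python; the 24-hour full-build interval is just an assumed default for illustration, not a recommendation from this thread):

```python
FULL_BUILD_INTERVAL = 24 * 3600  # assumed default: one full build per day

def choose_build(seconds_since_full_build):
    """Incremental by default for fast feedback; fall back to a full
    build once the long cycle's interval has elapsed, so the whole
    system still gets rebuilt and retested regularly."""
    if seconds_since_full_build >= FULL_BUILD_INTERVAL:
        return "full"
    return "incremental"

print(choose_build(600))        # → incremental
print(choose_build(25 * 3600))  # → full
```

The rule encodes the trade-off above: incremental builds keep the short cycle short, while the interval bounds how stale the "latest + greatest state of EVERYTHING" can get.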
Appleton Brad-BRADAPP1
2005-01-11 22:50:55 UTC
Permalink
one long-running (overnight and on the
integration server) and another that is often
called something like a "commit suite."
Not that there is a conflict, but I would
like to know how this fits into the ten-minute
build idea. I raised the ten minute build ideal
to a group yesterday and their first question
was "does that mean the story tests as well?"
Steve Berczuk and I co-wrote an article on "Agile Build Management" that addresses some of your questions. See <http://www.cmcrossroads.com/articles/agileoct03.pdf>

Other "Agile CM" articles that might relate to this discussion:
- "Approaching Continuous Integration"
<http://www.cmcrossroads.com/article/35638>
- "Agile Build Promotion: Navigating the Ocean of Promotion Notions"
<http://www.cmcrossroads.com/article/32900>
- "Continuous Staging: Scaling Continuous Integration to Multiple Component Teams"
<http://www.cmcrossroads.com/newsletter/articles/agilemar04.pdf>
- "Codeline Merging & Locking: Continuous Updates & Two-Phased Commits"
<http://www.cmcrossroads.com/articles/agilenov03.pdf>
- "Build Management for an Agile Team"
<http://www.cmcrossroads.com/articles/agileoct03.pdf>
- "Beyond Continuous Integration: A Holistic Approach to Build Mgmt"
Part 1: <http://www.cmcrossroads.com/articles/mzoct03.pdf>
Part 2: <http://www.cmcrossroads.com/articles/mznov03.pdf>
- "Continuous Integration - Just another buzz word?"
<http://www.cmcrossroads.com/articles/agilesep03.pdf>