[LRUG] TDD (was: Ruby Contracting)
James Adam
james at lazyatom.com
Thu Feb 19 07:41:13 PST 2009
Apologies for how long this response is, everyone.
On 19 Feb 2009, at 13:16, Eleanor McHugh wrote:
> If you build a test suite of 6 KSLOC for a shrink-wrap you'll have
> 20 defects which may not seem like many until you realise that
> mainstream TDD lacks tools for identifying those defects. It's not
> self-testing. Now a defect isn't necessarily a bug in the 'this line
> of code is incorrect' sense although if they were it would make our
> lives much easier. Unfortunately it's often the result of
> misunderstood or missing requirements and that's beyond the
> competence of a machine to identify.
You have not demonstrated that 'TDD' isn't useful here, merely stated
that there are some defects which it cannot cover, which is a far less
interesting point.
> As Chris demonstrates in his post, in the hands of a skilled
> developer with experience this still isn't a problem: poor
> requirements leave certain kinds of code smells and once those are
> identified you can do another round of client interrogation and
> analysis. That's where manual testing enters the frame. But this is
> an aesthetic sense that many younger developers lack and which a
> heavy reliance on automated testing can prevent them developing.
I'm not sure what you're suggesting. It sounds like you're saying that
manual testing is better at picking up these 'defects' than automated
testing is. However, no amount of manual testing would address defects
that are a result of misunderstandings between the customer and the
development team.
Furthermore, it sounds like you're suggesting that 'good [manual]
testing is a craft, and younger developers are less likely to develop
it because they practice TDD'. Again, I'm not sure of your basis for
this, as TDD, or any form of automated testing, is simply one tool in
a whole suite of approaches that may be useful. If you mean to say
'TDD alone isn't enough', why not just say that?
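For anyone reading along who hasn't tried it, the tool itself is tiny: state an expectation that fails, write the least code that satisfies it, repeat. A throwaway sketch in plain Ruby (no framework; the `leap_year?` method is hypothetical):

```ruby
# Step 2 of the cycle: the simplest implementation that satisfies the
# assertions below. (Step 1 was writing those assertions and watching
# them fail with a NoMethodError.)
def leap_year?(year)
  (year % 4).zero? && (!(year % 100).zero? || (year % 400).zero?)
end

# The 'test' half: executable statements of intent, in plain Ruby.
raise "expected 2000 to be a leap year"     unless leap_year?(2000)
raise "expected 1900 not to be a leap year" if leap_year?(1900)
raise "expected 2004 to be a leap year"     unless leap_year?(2004)
```

Nobody is claiming this replaces thinking about requirements; it just makes one reading of them executable.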
> You don't become a good programmer by writing code, you become a
> good programmer by reading it. Lots of it.
This is a sentence that reads like wisdom, but is it? Evidence please.
A pithy analogy: you don't become a good surgeon by performing
operations; you become a good surgeon by reading lots of anatomy books.
> Writing meaningful tests is likewise a difficult aesthetic to
> develop. I started off with something of an advantage as I studied
> physics at university and spent a lot of time performing experiments
> (many of which were intensely dull) and analysing the results. When
> you do that day in and day out for three or four years you soon
> realise that accurate testing is a very different skill to framing
> hypotheses: the experimentalist and the theoretician both have their
> place and by only utilising the skills of one or the other science
> wouldn't progress very far. Our theoreticians are the analysts and
> architects, whilst our experimenters used to be the QA testers.
> Programmers straddle the divide, but in general they're more on the
> theoretician side and our career path generally supports that.
>
> As a result programmers are in general the worst people in the world
> to be testing code - at least their own code - as their theoretical
> understanding of what needs to be done can and will interfere with
> the impartiality of those tests. There are ways around that such as
> having developers write tests for one set of requirements and then
> write code to fulfil the tests written by one of their colleagues
> but that's only practical if they're either working in a relatively
> large team (like a ThoughtWorks pod) or a very disciplined
> environment.
I also don't feel like you've got a great basis for generalisation
here. Your main point ("their theoretical understanding of what needs
to be done can and will interfere with the impartiality of those
tests") presumes that programmers are not capable of stepping back
from the mechanics and patterns of software development, and so not
capable of expressing the goals of software (the "hypothesis, method,
results" examination, to continue your analogy). The developers that I
have worked with don't fit into that pigeonhole.
Where you say "programmers are in general the worst people in the
world to be testing code", we find another phrase that reads like
wisdom, but lacks any support. I mean -
There are a lot of bad programmers,
AND bad programmers tend to write bad tests,
∴ programmers write bad tests for their own code
... well, I don't even know if I agree with that, but regardless, in
this case, your reaction ('don't let programmers write tests for their
own code') is a sweeping generalisation at best, and a discredit to
every programmer who is trying to become more than the "theoretician"
you might imagine them to be.
> Once you start moving to this approach though a lot more can be
> achieved by using collaborative offline code reviews (i.e. printing
> the code out and going through it by hand, marking up the bits that
> don't make sense with a nice bright highlighter). Most large
> companies that championed code reviews made them unworkable by tying
> them to performance metrics but I know quite a few old school
> hackers who never debug any other way. Indeed for some of the
> embedded systems I've worked on where the emulators were buggy as
> hell this was the only way to accurately reason about the code and
> much of what I take for granted as common instinct is the result of
> following that process.
While I applaud your instincts, perhaps your in-your-brain debuggery
is not a gift shared by the lesser 99.999% of us. While "the emulator
had bugs so we had to fix it by hand" is certainly a great story, not
every project is like Apollo 13.
AND you clearly hate trees.
> So why should code reviews be more effective than tests? Surely
> tests likewise allow us to reason about code? Because code reviews
> get straight to the point: identifying bad code and doing something
> about it. Test suites on the other hand focus on the much more
> nebulous and costly process of identifying good code.
But your complaint against automated testing at the top is that while
it does help with bad code, it's no use for misinterpreted
requirements. Code reviews won't help with that either.
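To make that concrete, here's a hypothetical sketch. Suppose the customer meant 'free shipping over £50 including VAT' but the developer read it as the net total: the test faithfully encodes the same misunderstanding, the suite stays green, and the defect sails past both the test run and the code review.

```ruby
# Hypothetical: the developer read "free shipping over 50" as the
# ex-VAT (net) total, when the customer meant the gross total.
FREE_SHIPPING_THRESHOLD = 50.00

def shipping_cost(net_total)
  net_total > FREE_SHIPPING_THRESHOLD ? 0.00 : 4.99
end

# The test encodes exactly the same misunderstanding, so it passes --
# no amount of green here can surface a requirement nobody wrote down.
raise unless shipping_cost(60.00) == 0.00
raise unless shipping_cost(40.00) == 4.99
```

That's a defect of shared understanding, and neither a green bar nor a bright highlighter will find it; only talking to the customer will.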
> But what if eliminating ninety percent of those defects can be
> achieved without a test suite (which in my experience it can) then
> the role of testing in a 30K codebase is restricted even further to
> finding just 10 defects. How well that expense can then be justified
> will vary from project to project, just as the defect count varies
> based on team experience, but I don't believe it's a priori
> demonstrable for all projects that a test suite is a desirable
> investment of developer effort.
So here you say "in my experience it can", and the tools you've
offered up are:
a) a good appreciation of the aesthetic of manual testing,
b) poring over code by hand, and
c) ... just having your experience.
> I know there's an argument that such test suites make regression
> testing easier, so perhaps if we expand the design brief to specify
> that the system design must be mutable that alone becomes a good
> argument in favour of [B|T]DD methodologies. I'd personally say that
> depends on the level of coupling required by the system architecture
> - sometimes components need to be tightly coupled for performance
> reasons - but that's a different discussion.
It probably is a different discussion, and a more relevant one.
There's never a design brief that says the system must be "mutable",
but there's *always* a requirement that a system should be maintainable.
Nobody cares about the design; it is simply a side-effect of
fulfilling the goals of the software. Customers care that the software
does what it is supposed to, and that its behaviour can be changed
over time as appropriate to their needs. See later.
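And maintainability is exactly where the regression argument earns its keep: pin the behaviour down once, and any later rewrite can be checked against the same assertions. A minimal sketch in plain Ruby (the `word_count` method is hypothetical):

```ruby
# A behaviour pinned down by executable examples...
def word_count(text)
  text.split(/\s+/).reject(&:empty?).length
end

# ...means a later rewrite (for speed, clarity, whatever) can be swapped
# in and re-checked against the same table, which is the whole
# regression-testing argument in miniature.
[
  ["two words", 2],
  ["  padded   input ", 2],
  ["", 0],
].each do |input, expected|
  actual = word_count(input)
  raise "word_count(#{input.inspect}) => #{actual}, expected #{expected}" unless actual == expected
end
```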
> I know that [B|T]DD can produce some amazing results when applied
> intelligently but as the de facto methodology of our age it is as
> blind and dumbheaded as the waterfall and SSADM. Ultimately these
> industry-wide obsessions are nothing more than ways of disguising a
> basic truth: programmers are a lot more like writers than machines
> and the quality of their code has more to do with personal passion
> than anything to do with established process.
>
> A couple of years ago there was a lot of passion in Agile and [B|
> T]DD is one of the fruits of that. As usual early adopters received
> big initial gains compared to larger players and so everyone has
> rushed to join the bandwagon. These methodologies in the hands of
> someone who loves and nurtures them will work well but that tells us
> more about the developer than about the methodology.
I think we're getting closer to your actual point. Certainly TDD and
Agile could be described as you have, but then we might also
characterise "wearing clothes" as an obsession - after all, we humans
have only been doing that for a fraction of the time we've been on the
planet (I think it might stick though).
Maybe OO programming is an obsession? Or <insert your paradigm here>?
Personally, I think the romanticised-notion-of-a-hacker-poring-over-
code-and-finding-bugs-through-their-meat-brains-alone is a bit of an
obsession in some quarters, but that's just me.
You state that the quality of code is more closely correlated with
passion than with process, but I think you've missed something subtle.
Programmers who are passionate are surely more likely to care about
their process - about their craft, if you will - than programmers who
are not. It's not the passion itself that leads to great code, but the
way that passion alters the programmer's relationship to their code
and their peers.
"Programmers are a lot more like writers than machines..." - it's
appealing to consider ourselves more like writers, or artists, than
machines, and I can understand the motivation. Who wants to be a
machine?
We might be like writers, taking the users of our software through
some kind of narrative. Or, we might be a bit like architects - our
customers have requirements, and we construct an artefact - a virtual
house, if you will - that supports their needs functionally and can
also be appreciated as an elegant solution.
However, show me the architect whose clients change their mind about
the arrangement of the rooms - even after the house has been built -
and who can rearrange them with complete confidence that the walls
will still support the roof, without going back to the drawing board.
Show me the writer who can rearrange the chapters of their novel, or
change the relationships between their characters, and still be sure
that the narrative delivers the same impact and emotional nuance,
without having a new set of editors poring over the whole manuscript
from the very start.
These analogies obviously don't work, so while there are surface
similarities, there must be something fundamentally different about
the act of building software.
It's obvious that programming is a creative task, but it would be
foolish to ignore the fact that we are programming machines here. We
can take advantage of this by supplying two sets of descriptions of
our intentions that the machines can understand and check for us. One
is the test suite, the other is the implementation. They should be
written in different ways, so I'd argue against any blanket statement
that "tests are code" - that's obviously true on some levels, but
utterly misleading on others.
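A minimal illustration of that redundancy, using a deliberately trivial (and hypothetical) example: the implementation states a general rule, the tests state specific concrete facts, and the machine checks one description against the other. Restate the modulo arithmetic inside the tests and you throw away exactly that redundancy.

```ruby
# The implementation: a general rule.
def classify(n)
  return "fizzbuzz" if (n % 15).zero?
  return "fizz"     if (n % 3).zero?
  return "buzz"     if (n % 5).zero?
  n.to_s
end

# The tests: specific facts, deliberately written as a plain table
# rather than by repeating the arithmetic -- two different descriptions
# of the same intent, each able to catch slips in the other.
{ 3 => "fizz", 5 => "buzz", 15 => "fizzbuzz", 7 => "7" }.each do |input, expected|
  actual = classify(input)
  raise "classify(#{input}) => #{actual.inspect}" unless actual == expected
end
```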
Automated testing might not be your bag, but that's fine, it's only a
means to an end - software that works and can be maintained. Many
people find it useful; some people don't. The Flying Spaghetti Monster
will judge us all on the sum of our merits at the end of time.
I'll let you get back to your print outs now (I was joking about the
tree bit, although please do bear them in mind).
Thanks, and sorry everyone! Think about all the valuable tests I
could've written instead of this email, eh? :)
James