[LRUG] Writing readable feature tests with RSpec

Graham Ashton graham at effectif.com
Tue Jul 29 03:33:01 PDT 2014


On 29 Jul 2014, at 10:22, Tom Stuart <tom at codon.com> wrote:

> A meta-point: this is one of the things that frustrates me about conversations on these topics.
> 
> People who are sceptical about X (RSpec, Cucumber, TDD generally, Ruby, etc) will say “I don’t know what’s going on with these X people. I just do the eminently sensible thing Y, and that works for me” — the implication being that the people in question are doing themselves and their clients a disservice by using the inferior X as an ersatz substitute for the superior Y.

Apologies if I’ve caused frustration, but I hope that - in this case - the implication is yours rather than mine. I wrote that email carefully, hoping I’d made it clear that I’ve been curious about the widely polarised opinions of bright, highly experienced people, and that Liz had given me food for thought. My aim was to share it, but perhaps I didn’t do that clearly.

> It invariably then turns out that everybody already agrees that Y is good and (hopefully) is already doing it anyway, and the people doing X are just using it as some kind of multiplier on Y, so no real understanding is transferred in either direction.

I too have noticed that developers can debate things energetically, only to later discover that they (we) actually agree on the salient points, and language/context caused people to think otherwise.

> To take this email as an example: “I do discovery with stakeholders, on paper, away from the computer”. Me too! That bit’s really important!

I was explaining the “mechanics” (thanks Anthony) that I use simply as a way of trying to illustrate why *I* don’t get any value from writing scenarios. I think examples are a useful way to clarify discussion, and describing my approach was as close as I could get. I hope I didn’t imply that I thought everybody should do as I do (though on re-reading my email, I think I started to stray slightly from this path in my penultimate sentence).

I suppose part of the problem is that when we read emails like mine we read them in a personal context. Depending on our own viewpoint, it might be the context of “oh, here’s a cucumber naysayer - this email might frustrate me” or “here’s a cucumber naysayer, hurrah!”. Or it might be the context in which I wrote it - “I find it fascinating that I hadn’t noticed that just because you’ve written a scenario, you don’t have to automate it - might this explain differences of opinion?”.

> And then when I’ve done that, I find it useful to take what I’ve learned and write it down again in a more formal way so that it can be seen by the computer and by other developers. Writing it down again is useful.

Me too! That bit’s really important!

I just choose to use a different dialect.
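
To make the “dialect” point a bit more concrete, here’s a rough sketch of the sort of thing I mean (the app, route and copy are invented for illustration, and it assumes rspec-rails 3 with Capybara): a feature spec written in plain Ruby, with the intent carried by the example names rather than by a separate .feature file and step definitions.

    # spec/features/sign_up_spec.rb -- illustrative only
    require "rails_helper"

    RSpec.feature "Signing up" do
      scenario "a visitor creates an account and is welcomed by name" do
        visit "/sign_up"

        fill_in "Email", with: "alice@example.com"
        fill_in "Password", with: "secret-password"
        click_button "Sign up"

        expect(page).to have_content "Welcome, alice@example.com"
      end
    end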

> The computer can automatically check what I’ve written as part of a continuous integration process. Other developers can check what I’ve written and quickly understand what’s actually important about the software (the “why”) rather than the details of its operation (the “what”). It takes time to write it down again, but the benefits outweigh the costs.

I think “why” is the most important bit, but (at risk of going off topic) I prefer not to use scenarios to record it, as I don’t think they can describe the “business” context adequately.

> I don’t have a solution to this; I don’t expect my descriptions of the above techniques to convince anyone who doesn’t already believe they’re useful. But I wanted to acknowledge the problem even if I don’t yet have a way to write its solution down.

Yeah, fair.

You’ve reminded me of a blog post by Steve Freeman that touches on how experts make decisions on subjects like this. I’m thinking of this paragraph in particular:

> [I]t turns out that people don’t actually spend their time carefully working out the trade-offs and then picking the best option. Instead, we employ a “first-fit” approach: work through an ordered list of learned responses and pick the first one that looks good enough. All of this happens subconsciously, then our slower rational brain catches up and justifies the existing decision—we can’t even tell it’s happening. Being an expert means that we’ve built up more patterns to match so that we can respond more quickly and to more complicated situations than a novice, which is obviously a good thing in most situations. It can also be a bad thing because the nature of our perception means that experts literally cannot receive certain kinds of information that falls outside their training, not because they’re inadequate people but because that’s how the brain works.

From http://www.higherorderlogic.com/2008/06/test-driven-development-a-cognitive-justification/

Perhaps this isn’t a problem to be solved by writing stuff down? I think “learned responses” will only include things we’ve used ourselves, rather than things we’ve read about.

Cheers,
Graham


-- 
Graham Ashton
Founder, Agile Planner
https://www.agileplannerapp.com | @agileplanner | @grahamashton



