[LRUG] Large Slow Test Suites

Mark Burns markthedeveloper at gmail.com
Thu Dec 5 00:07:09 PST 2013


Another suggestion I just remembered that might help with the feedback loop
at least, and which I was surprised to only learn about recently, is zeus:
https://github.com/burke/zeus/

The tag line, if I had to think of one, would be "spork, but less flaky".
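For anyone who hasn't tried it, the workflow is roughly this (a sketch based
on the zeus README at the time, assuming a Rails app with the default plan):

```shell
# zeus is installed system-wide rather than via the Gemfile
gem install zeus

# in one terminal: boot and preload the Rails environment once
zeus start

# in another terminal: each run skips the Rails boot time
zeus test spec/models/user_spec.rb
zeus console
```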


On 5 December 2013 07:50, Graham Ashton <graham at effectif.com> wrote:

> On 4 Dec 2013, at 22:35, Mark Burns <markthedeveloper at gmail.com> wrote:
>
> > Was going to suggest the --profile flag until I realised that it’s just
> > for RSpec.
> >
> > Might sound like a silly question, but have you looked at profiling the
> > suite for the slowest tests? It could help two-fold: spot the actual
> > tests that are taking up the time, or, maybe even better, spot the
> > pattern behind what is taking the most time across all the tests.
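> For completeness: the flag takes an optional count. A typical invocation
> (standard RSpec usage, nothing specific to this thread):

```shell
# print the 10 slowest examples after the run
rspec --profile 10 spec/
```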
>
> You’ve reminded me of one of my favourite features of Tconsole (a
> test-unit test runner).
>
> It has a !timings command that will report how long each of the tests you
> just ran took, sorting the output so the slowest tests appear at the top.
> You can see it in the video at around 2 minutes 50 seconds:
>
> http://vimeo.com/37641415
>
> Quite apart from showing timings, Tconsole really comes into its own when
> you’re working with a slow test suite, giving you some of your productivity
> back immediately.
>
> You can ask it to:
>
> - Only run the tests that failed in your last run of the suite (especially
> useful after you’ve run a 30 minute test suite, to find that a handful
> failed).
>
> - Run the tests for a list of test classes (e.g. just those that cover the
> area you’re working on); you specify them by name on the command line.
>
> It also has a fast mode, where it stops as soon as it gets the first test
> failure (I find this less useful than running the specific tests that are
> currently relevant, but it’s great when you’ve got a lot of failures).
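> (The !timings idea is easy to approximate in plain Ruby if your runner
> doesn't have it; a minimal, hypothetical sketch of the sort it performs,
> with made-up test names and durations:)

```ruby
# Hypothetical sketch: given per-test durations in seconds, report
# slowest-first, which is what a "!timings"-style command boils down to.
timings = {
  "UserTest#test_signup"         => 4.2,
  "InvoiceTest#test_totals"      => 0.3,
  "SearchTest#test_full_reindex" => 9.8,
}

# sort descending by duration so the worst offenders appear first
slowest_first = timings.sort_by { |_name, seconds| -seconds }

slowest_first.each do |name, seconds|
  puts format("%7.2fs  %s", seconds, name)
end
```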
>
> In terms of a process for dealing with a 30 minute test suite, I’d
> definitely be exploring speed improvements, but first I’d investigate
> whether I needed all the tests.
>
> I’ve had great success with deleting less useful (or duplicated) tests,
> and not just the slow ones; every test needs to give more value to the
> project than the cost of keeping it around.
>
> What stage is the app at? Is it live with lots of customers, or is it
> early days? Where the business is up to would have an impact on how
> ruthless I’d be with carving off bits of the test suite.
>
> I once took over a project in which we deleted all the acceptance tests on
> the first afternoon. There were loads of them, and they kept breaking as we
> were adding new features. We were wasting a lot of time keeping them
> running.
>
> The project was on an extremely tight budget, and keeping the acceptance
> tests would have wasted a lot of the cash. Controller tests were written in
> RSpec, so we turned on view rendering (which would catch any failures to
> render the templates - the most likely failures caught by the integration
> tests) and deleted the acceptance tests. Nothing went wrong, and we were
> able to get a lot done quickly on a tiny budget. A few integration tests
> were written later on to cover the happy path of the most important parts
> of the site (e.g. the sign up pages).
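> (For anyone wanting to do the same: turning on view rendering in RSpec
> controller specs is a one-line configuration change; render_views is part
> of rspec-rails.)

```ruby
# spec/spec_helper.rb
# Render the real templates in controller specs, so a template that fails
# to render breaks the spec instead of slipping past a stubbed render.
RSpec.configure do |config|
  config.render_views
end
```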
>
> We might not have done it that way if we’d had thousands of users, but it
> worked great in that context.
>
> Cheers,
> Graham
>
> --
> Graham Ashton
> Founder, Agile Planner
> https://www.agileplannerapp.com | @agileplanner | @grahamashton
>
> _______________________________________________
> Chat mailing list
> Chat at lists.lrug.org
> http://lists.lrug.org/listinfo.cgi/chat-lrug.org
>