[LRUG] Slides from last nights presentation
Tim Cowlishaw
tim at timcowlishaw.co.uk
Tue Aug 10 02:57:17 PDT 2010
On 10 Aug 2010, at 09:45, Mr Jaba wrote:
>
> Question I forgot to ask was, after the refactoring, how fast was that 1min page load?
Heh, I can't actually answer that right now, as the 1min page load was on a specific set of data in our production database that's long since gone (I think someone had set up their entire company's work as a single project, which meant it was doing rather a lot of work when something was rescheduled). I will dig it out of backups and try it again at some point though, as it'd be a really useful metric to judge how much we've improved.
To give a (slightly) more concrete answer, my back-of-an-envelope calculations suggest that our old model scheduled new tasks in O(n^2) time, where n is the number of tasks in a project, while the new model is O(n log n) (since it's just Enumerable#sort), where n is the (smaller) number of tasks assigned to a specific user. Put even more concretely: while there's still a cost to working out the priority of a task, we no longer need to recalculate it for *all* the tasks in that project, leading to a substantial saving.
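To make the O(n log n) claim concrete, here's a minimal sketch of the new-model approach; Task and :priority are hypothetical stand-ins for the real models, not our actual code:

```ruby
# Illustrative only: Task and :priority are made-up stand-ins.
Task = Struct.new(:title, :priority)

tasks = [Task.new("ship", 3), Task.new("plan", 1), Task.new("review", 2)]

# New model: scheduling one user's tasks is just a sort, so the
# cost is O(n log n) in that user's task count, not the project's.
scheduled = tasks.sort_by(&:priority)
scheduled.map(&:title) # => ["plan", "review", "ship"]
```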
The other advantage is that decoupling the scheduling code from the rest of the system as far as possible (we haven't done this yet, as it *would* be a premature optimisation IMO) will make it far easier to use a work-queue-like structure to process these potentially expensive operations outside the request/response cycle if we need to. That would have been totally impossible under the old system.
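As a rough sketch of what that decoupling buys you (illustrative only - in practice you'd reach for something like Delayed::Job or Resque rather than hand-rolling this), the request cycle just enqueues a job and returns, and a background worker does the expensive scheduling:

```ruby
require "thread"

# Hypothetical sketch: a background worker drains a queue of
# scheduling jobs so the request/response cycle never blocks on them.
JOBS = Queue.new

worker = Thread.new do
  while (job = JOBS.pop)
    break if job == :stop
    job.call # the potentially expensive scheduling work
  end
end

# In the request cycle we just enqueue and return immediately.
results = []
JOBS << -> { results << :rescheduled }
JOBS << :stop
worker.join
results # => [:rescheduled]
```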
So, the short answer is "no idea, but I'll find out!" We've been able to measure the performance benefits to a reasonable degree in other ways, though.
Cheers,
Tim