[LRUG] Final lineup & registration details for February

Makoto Inoue inouemak at googlemail.com
Tue Feb 8 04:40:57 PST 2011


Hi, Neil.

Thank you very much for the detailed response. This is really useful info.

Can I ask one more question?

In any of your actions, is there anything that takes longer than one
second and is hard to cache?

The app I have in mind for Heroku has some requests that act as a proxy
to an external third-party API and take several seconds to come back, so
requests quickly queue up.

A few workarounds we can think of are listed below, but can you think of any
other (better) ways?

A: Respond quickly by putting the external API call into a background job,
and check for the response via periodic Ajax polling (or alternatively via
Comet or WebSockets); there's a rough sketch of this below.
B: Separate this long-running request out into a different component (e.g.
node.js on Heroku or another PaaS) and call it via a cross-domain Ajax call.
C: Don't use Heroku at all; it isn't meant for that kind of purpose.
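
To make option A concrete, here is a rough sketch of the shape I have in
mind (purely illustrative: the controller, model, job class and API URL are
all made up, and it assumes a Rails app with Resque or a similar background
queue):

    # app/controllers/proxy_requests_controller.rb
    class ProxyRequestsController < ApplicationController
      # Respond straight away: enqueue the slow external call and hand back
      # an id the browser can poll.
      def create
        proxy_request = ProxyRequest.create!(:status => "pending")
        Resque.enqueue(ExternalApiJob, proxy_request.id)
        render :json => { :id => proxy_request.id, :status => "pending" }
      end

      # Polled via Ajax until the background job has finished.
      def show
        proxy_request = ProxyRequest.find(params[:id])
        render :json => { :status => proxy_request.status,
                          :body   => proxy_request.response_body }
      end
    end

    # app/jobs/external_api_job.rb (the worker that talks to the slow API)
    require 'net/http'
    require 'uri'

    class ExternalApiJob
      @queue = :external_api

      def self.perform(proxy_request_id)
        proxy_request = ProxyRequest.find(proxy_request_id)
        response = Net::HTTP.get_response(URI.parse("http://api.example.com/slow"))
        proxy_request.update_attributes(:status        => "done",
                                        :response_body => response.body)
      end
    end

On the client side a small piece of JavaScript would then poll the show
action every second or two until the status comes back as "done".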

Thanks.

Makoto

On Tue, Feb 8, 2011 at 10:34 AM, Neil Middleton <neil.middleton at gmail.com> wrote:

> Hi Makoto
>
> On Tue, Feb 8, 2011 at 8:54 AM, Makoto Inoue <inouemak at googlemail.com> wrote:
>
>>
>> 1. What was the maximum number of requests per second when you were stormed
>> with traffic, and how many dynos did you crank up at most? (You showed us
>> some graphs of thousands of hits after the ads; was that the rpm figure from
>> New Relic or the stats from Google Analytics?) I think Heroku has a limit of
>> 25 dynos per app. Is that sufficient to handle huge traffic?
>>
>
> At peak we were seeing around 100 requests per second (actually a tad
> under, at 5,600 per minute).  As we wanted to ensure that we never started
> forming a queue, we cranked up in increments of 10 dynos, up to around 45
> if I recall correctly.  Once we were confident that we had more than enough
> we started to wind down to find the 'happy place'.  As we hadn't spent any
> time optimising, the average page request was ~200ms, so we knew, in theory,
> that a single dyno could manage 4-5 requests/sec, although in reality you
> don't really get that...
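>
> To spell that arithmetic out (very much back-of-the-envelope, assuming
> requests arrive evenly and each dyno serves one request at a time):
>
>     avg_response = 0.2                    # seconds per request
>     per_dyno     = 1 / avg_response       # ~5 requests/sec per dyno, in theory
>     peak_load    = 100.0                  # requests/sec at peak
>     dynos_needed = peak_load / per_dyno   # ~20 dynos on paper
>
> In practice you want a healthy margin on top of that, hence cranking up to
> 40-odd dynos and then winding back down.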
>
> The reason for wanting to avoid a queue is that a queue gives you two
> problems at once: you have to process the requests already queued and clear
> that backlog, while still serving the new requests coming in.
>
> Heroku limits you to 100 dynos out of the box (the UI only goes up to 25,
> but you can go higher with the CLI that Heroku provides).  Assuming that you
> can get a 100ms response time, just under 1,000 requests/second should be
> doable (for five bucks an hour).  That said, if you talk to Heroku support
> you can probably get more dynos if you need them.
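>
> As another rough sketch (assuming the same naive per-dyno maths as above,
> and dyno pricing of $0.05 per dyno-hour, which is where the five bucks
> comes from):
>
>     dynos         = 100
>     per_dyno      = 1 / 0.1            # ~10 requests/sec at 100ms per request
>     capacity      = dynos * per_dyno   # ~1,000 requests/sec across the fleet
>     cost_per_hour = dynos * 0.05       # ~$5/hour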
>
>> 2. How do you know when you should crank dynos up or down? I don't think
>> Heroku offers autoscaling the way Google App Engine does. Did you keep
>> monitoring New Relic, or did you have some sort of monitoring system to
>> alert you when certain metrics reached a threshold?
>>
>
> There are tools coming out that allow you to autoscale, but we were doing
> it manually.  Heroku really needs to add this, as it would be a massive help.
>
>> 3. What kind of controller actions or ActiveRecord queries did you find to
>> be bottlenecks, and how did you solve them (or did you just crank up enough
>> dynos to ignore them)?
>>
>
> The only caching we were doing (via memcached) was of page HTML fragments,
> which we then built the pages up from.  Given this, the overall performance
> of the app didn't really change.  The DB layer stayed nice and fast, and the
> dynos were chugging away quite happily.
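>
> The fragment caching itself was nothing fancy; roughly this kind of thing
> (not our actual code, just the shape of it: the view, partial and cache key
> names are made up, and it assumes Rails pointed at a memcached server):
>
>     # config/environments/production.rb
>     config.cache_store = :mem_cache_store, 'localhost:11211'
>
>     # app/views/events/show.html.erb (cache the expensive HTML fragment)
>     <% cache "event-page-#{@event.id}" do %>
>       <%= render :partial => 'event_details', :locals => { :event => @event } %>
>     <% end %>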
>
> HTH
>
> Neil Middleton
>
> http://about.me/neilmiddleton
>
>
> _______________________________________________
> Chat mailing list
> Chat at lists.lrug.org
> http://lists.lrug.org/listinfo.cgi/chat-lrug.org
>
>

