Hi, Neil.<div><br></div><div>Thank you very much for the detailed response. This is really useful info.</div><div><br></div><div>Can I ask one more question?</div><div><br></div><div>In any of your actions, is there anything that takes longer than one second and is hard to cache?</div>
<div><br></div><div>The app I have in mind for Heroku has some requests that act as a proxy to an external third-party API; those calls take several seconds to come back, so requests quickly queue up.</div><div><br></div>
<div>A few workarounds we can think of are as follows, but can you think of any other (better) ways?</div><div><br></div><div>A: respond quickly by moving the external API call into a background job, and check for the response via periodic AJAX polling (Comet or WebSockets could be used instead).</div>
<div>B: split this long-running request out into a separate component (e.g. Node.js on Heroku or another PaaS) and call it via a cross-domain AJAX call.</div><div>C: don't use Heroku; it isn't meant for this kind of purpose.</div>
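<div>For option A, a minimal sketch of the enqueue-then-poll shape, using a plain Ruby thread and an in-memory hash to stand in for a real background-queue backend (a real Heroku app would use something like delayed_job or Resque); all names here are illustrative, not an actual API:</div>

```ruby
require 'securerandom'
require 'thread'

# In-memory job store standing in for a real queue backend
# (e.g. delayed_job or Resque on Heroku).
JOBS = {}
JOBS_LOCK = Mutex.new

# Stubbed slow third-party call (several seconds in the real app).
def slow_third_party_call
  sleep 0.1
  'api response'
end

# "Enqueue" step: kick the slow call into the background and return
# a token immediately, so the web request itself stays fast.
def enqueue_slow_call
  id = SecureRandom.hex(8)
  JOBS_LOCK.synchronize { JOBS[id] = { :status => :pending, :result => nil } }
  Thread.new do
    result = slow_third_party_call
    JOBS_LOCK.synchronize { JOBS[id] = { :status => :done, :result => result } }
  end
  id
end

# Endpoint the periodic AJAX poll would hit.
def job_status(id)
  JOBS_LOCK.synchronize { JOBS[id].dup }
end
```

<div>The client keeps the token from enqueue_slow_call and polls job_status until :status flips to :done; Comet or WebSockets would just replace the polling with a push.</div>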
<div><br></div><div>Thanks.</div><div><br></div><div>Makoto</div><div><br><div class="gmail_quote">On Tue, Feb 8, 2011 at 10:34 AM, Neil Middleton <span dir="ltr"><<a href="mailto:neil.middleton@gmail.com">neil.middleton@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">Hi Makoto<br><br><div class="gmail_quote"><div class="im">On Tue, Feb 8, 2011 at 8:54 AM, Makoto Inoue <span dir="ltr"><<a href="mailto:inouemak@googlemail.com" target="_blank">inouemak@googlemail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div><br></div><div>1. What was the max number of requests per second when you were stormed with traffic, and how many dynos did you crank up at most? (You showed us graphs of thousands of hits after the ads; was that the rpm figure from NewRelic or stats from Google Analytics?) I think Heroku has a limit of 25 dynos per app. Is that sufficient to handle huge traffic?</div>
</blockquote><div><br></div></div><div>At peak, we were seeing around 100 requests per second (actually a tad under: about 5,600 per minute). As we wanted to ensure we never started forming a queue, we cranked up in increments of 10 dynos, up to around 45 if I recall correctly. Once we were confident we had more than enough, we started to wind down to find the 'happy place'. As we hadn't spent any time optimising, we had an average page request time of ~200ms, so we knew that, in theory, a single dyno could manage 4-5 requests/sec, although in reality you don't really get that...</div>
<div><br></div><div>The reason for wanting to avoid a queue is that a queue gives you two problems at once: you have to clear the backlog while still processing the incoming requests.</div>
<div><br></div><div>Heroku limits you to 100 dynos out of the box (the UI only goes up to 25), but you can go for more via the CLI that Heroku provides. Assuming you can get a 100ms response time, just under 1,000 requests/second should be doable (for five bucks an hour). That said, if you talk to Heroku support you can probably get more dynos if you need them.</div>
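<div>The back-of-envelope arithmetic above (average response time gives per-dyno throughput, which gives a dyno count) can be sketched as below; the utilisation factor is my own assumption, a hedge for the "in reality you don't really get that" caveat:</div>

```ruby
# Back-of-envelope dyno sizing. A single-threaded dyno can serve at
# most 1000.0 / avg_response_ms requests per second; utilisation
# discounts that theoretical ceiling.
def dynos_needed(target_rps, avg_response_ms, utilisation = 0.5)
  per_dyno_rps = (1000.0 / avg_response_ms) * utilisation
  (target_rps / per_dyno_rps).ceil
end

dynos_needed(100, 200)        # => 40, close to the ~45 used at peak
dynos_needed(1000, 100, 1.0)  # => 100, the ideal-case 1,000 req/s figure
```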
<div class="im">
<div><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>2. How do you know when you should crank dynos up or down? I don't think Heroku offers autoscaling like Google App Engine does. Did you keep monitoring NewRelic, or did you have some sort of monitoring system to alert you when certain metrics reached a threshold?</div>
</blockquote><div><br></div></div><div>There are tools coming out that allow you to autoscale, but we were doing it manually. Heroku really need to add this as it would be a massive help. </div><div class="im"><div><br>
</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>3. What kinds of controller actions and ActiveRecord queries did you find to be bottlenecks, and how did you solve them (or did cranking up enough dynos let you ignore them)?</div></blockquote><div><br></div></div><div>The only caching we were doing (via memcached) was of HTML page fragments, from which we then built up the pages. Given this, the overall performance of the app didn't really change: the DB layer stayed nice and fast, and the dynos were chugging away quite happily.</div>
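<div>The fragment-caching pattern Neil describes reduces to a read-through cache; here is a minimal sketch with a plain Hash standing in for memcached (in a Rails app the same pattern is Rails.cache.fetch), method and key names invented for illustration:</div>

```ruby
# Minimal read-through fragment cache. A plain Hash stands in for
# memcached here.
CACHE = {}

def cache_fetch(key)
  return CACHE[key] if CACHE.key?(key)
  CACHE[key] = yield
end

# The expensive render runs only on a cache miss; subsequent
# requests reuse the stored fragment, and pages are assembled
# from such fragments.
def sidebar_fragment
  cache_fetch('fragments/sidebar') do
    "<div id='sidebar'>...expensive render...</div>"
  end
end
```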
<div><br></div><div>HTH</div><div><br></div></div><div>Neil Middleton</div><div><br></div><div><a href="http://about.me/neilmiddleton" target="_blank">http://about.me/neilmiddleton</a></div><br>
<br>_______________________________________________<br>
Chat mailing list<br>
<a href="mailto:Chat@lists.lrug.org">Chat@lists.lrug.org</a><br>
<a href="http://lists.lrug.org/listinfo.cgi/chat-lrug.org" target="_blank">http://lists.lrug.org/listinfo.cgi/chat-lrug.org</a><br>
<br></blockquote></div><br></div>