<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class="">Hi Najaf,<div class=""><br class=""></div><div class="">Over the years I’ve done quite a lot of this kind of large-scale batching of messages for delivery, and we’ve typically solved the problem as follows:</div><div class=""><br class=""></div><div class=""><ul class="MailOutline"><li class="">A single job is added to your queue to broadcast your messages (emails or SMS). This is done synchronously from the interface the user sends the instruction from, so it will either fail or succeed, and you can then safely confirm to the user that the job is queued.</li><li class="">That single job is then responsible for setting up the individual jobs that deliver each message. As you have stated, you want this job to be able to recover from where it left off in the case of catastrophic failure, so you really want this work inside a transaction of some sort. However, most job queues live in a different transactional context to your database itself, so it’s very difficult to get the two to roll back together. So instead, this single job iterates over the list of recipients and creates a new record in the database for each message queued for that user - for example, if you had the tables users and email_campaigns, you would have a join table email_campaigns_users, and for every email sent to a user you would add a record referencing both the user & the email campaign. You could then either have a before_commit hook on that model that inserts a job into the job queue to send a message to that user, or you could enqueue it as part of the job that creates the record. Either way, if the job is not inserted into the queue successfully, the commit will fail, which is exactly what you want. 
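As a very rough sketch of that fan-out step in plain Ruby - with made-up stand-ins for the join table and the job queue rather than real ActiveRecord/Sidekiq calls, so the names here are purely illustrative:

```ruby
# Sketch only: DELIVERY_RECORDS stands in for the email_campaigns_users
# join table, and QUEUE for your job queue (Sidekiq/Resque/DJ).
require "set"

QUEUE = []                  # stand-in for the job queue
DELIVERY_RECORDS = Set.new  # stand-in for the email_campaigns_users table

# Idempotent fan-out: safe to re-run after a crash, because users who
# already have a record are skipped on the retry.
def fan_out(campaign_id, user_ids)
  user_ids.each do |user_id|
    key = [campaign_id, user_id]
    next if DELIVERY_RECORDS.include?(key) # already queued on an earlier attempt

    # In a real app these two steps would share one database transaction,
    # with the enqueue done in a before_commit hook so a failed enqueue
    # rolls the record back.
    DELIVERY_RECORDS.add(key)
    QUEUE << { job: "DeliverEmail", campaign_id: campaign_id, user_id: user_id }
  end
end

fan_out(1, [10, 11])           # first attempt "crashes" after two users
fan_out(1, [10, 11, 12, 13])   # retry: 10 and 11 are skipped, no duplicates
```

On the retry, only users 12 and 13 get new jobs, so nobody is messaged twice.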
Finally, if this job fails at any point it will be retried, so you need to ensure that the query building up the list of users to message excludes any users who already have a record of being sent the message.</li><li class="">The final job, which delivers the actual message, is now incredibly simple, so the job queue can easily distribute it across all your workers.</li></ul><div class=""><br class=""></div><div class="">Hope that helps somewhat. </div><div class=""><br class=""></div><div class="">Matt</div><div class=""><a href="https://ably.io" class="">https://ably.io</a> </div><div><br class=""><blockquote type="cite" class=""><div class="">On 16 Mar 2015, at 09:12, Najaf Ali <<a href="mailto:ali@happybearsoftware.com" class="">ali@happybearsoftware.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="ltr" class="">Hi all,<div class=""><br class=""></div><div class="">I'm trying to identify some general good practices (based on real-life problems) when it comes to working with async job queues (think DJ, Resque and Sidekiq).</div><div class=""><br class=""></div><div class="">So far I've been doing this by collecting stories of how they've failed catastrophically (e.g. sending thousands of spurious SMSes to your customers) and seeing if I can identify any common themes based on those.</div><div class=""><br class=""></div><div class="">Here are some examples of what I mean (anonymised to protect the innocent):</div><div class=""><br class=""></div><div class="">* Having a (e.g. hourly) cron job that checks if a job has been done and then enqueues the job if it hasn't. It knows this because the successfully completed job would leave some sort of evidence of completion in e.g. the database. 
If your workers go down for a day, this means the same job would be enqueued over and over again superfluously.</div><div class=""><br class=""></div><div class="">* Sending multiple emails (hundreds) in a single job led to a problem where if just one of those emails (say the 24th) fails to be delivered, the entire job fails and emails 1-23 get sent again when your worker retries it again and again and again.</div><div class=""><br class=""></div><div class="">* With the workers/app running the same codebase but on different virtual servers, deploying only to the application server (and not the server running the workers) resulted in the app servers queueing jobs that the workers didn't know how to process. </div><div class=""><br class=""></div><div class="">It would be great to hear what sort of issues/incidents you've come across while using async job queues like the above. I don't think I have enough examples to make any generalisations about the "right way" to use them yet, so I'm more interested in just things that went wrong and how you fixed them at the moment.</div><div class=""><br class=""></div><div class="">Feel free to reply off-list if you'd rather not share with everyone; I intend to put the findings together in a blog post with a few guesses as to how to avoid these sorts of problems.</div><div class=""><br class=""></div><div class="">All the best,</div><div class=""><br class=""></div><div class="">-Ali</div><span class=""></span></div>_______________________________________________<br class="">Chat mailing list<br class=""><a href="mailto:Chat@lists.lrug.org" class="">Chat@lists.lrug.org</a><br class="">Archives: http://lists.lrug.org/pipermail/chat-lrug.org<br class="">Manage your subscription: http://lists.lrug.org/options.cgi/chat-lrug.org<br class="">List info: http://lists.lrug.org/listinfo.cgi/chat-lrug.org<br class=""></div></blockquote></div><br class=""></div></body></html>