[LRUG] Intelligent load balancing
Riccardo Tacconi
rtacconi at gmail.com
Tue Mar 4 07:52:21 PST 2014
Have a look at https://github.com/igrigorik/em-proxy/. Yes, it is Ruby
(slow), but it uses EventMachine (fast). I used it in production to load
balance a MySQL cluster and it worked fine, although the load was light.
HAProxy can remove a node using an HTTP check, but I am not sure whether you
can dynamically change the load sent to a node based on metrics (node load).
nginx-lua seems like a good solution too.
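
As a very rough sketch of what I mean (the backend addresses, the
X-Customer-Id header and the beta_customers Redis set are made-up
placeholders, and it is worth checking the on_data return convention against
the em-proxy examples before trusting it), routing each request to an old or
new stack based on an application setting could look something like this:

require 'em-proxy'
require 'redis'

redis = Redis.new  # defaults to localhost:6379

Proxy.start(:host => '0.0.0.0', :port => 8080) do |conn|
  conn.server :old, :host => '10.0.0.1', :port => 80  # current stack (placeholder address)
  conn.server :new, :host => '10.0.0.2', :port => 80  # new version in parallel (placeholder)

  conn.on_data do |data|
    # Naive header sniffing: assumes the request headers arrive in the
    # first chunk and carry an identifying header (placeholder name).
    customer = data[/^X-Customer-Id:\s*(\S+)/i, 1]

    # The "application setting" lives in Redis here; changing membership
    # of the set moves a customer between stacks without a redeploy.
    backend = (customer && redis.sismember('beta_customers', customer)) ? :new : :old

    # Returning [data, [names]] should relay the request only to the
    # chosen backend; returning plain data relays to all registered servers.
    [data, [backend]]
  end

  conn.on_response do |backend, resp|
    resp  # pass responses straight back to the client
  end
end

A blocking Redis call inside the EventMachine reactor is not great under
real load (caching the flag or using an async client would be better), but
something like this is enough to prove the approach, and sending everyone
back to the old stack is just a matter of emptying the set.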
On 4 March 2014 15:31, Sam Phillips <sam at samsworldofno.com> wrote:
> Hi Ed,
>
> Mixlr did something along these lines before, where they scripted nginx
> with lua, looking up the data via redis:
>
> http://devblog.mixlr.com/2012/09/01/nginx-lua/
>
> I looked into it at Shutl and found the custom compilation etc. of nginx to
> be a real pain, the lua/nginx modules to be a bit of a ghetto and, of
> course, hard to test. In the end, we ended up segmenting in nginx based on
> the request itself. In our case, most requests were https/api/json with
> identifying headers, so we were able to use that header to segment. A
> relatively small customer list (our customers are big retailers, rather
> than consumers) meant we could just configure which customers went where in
> chef and redeploy.
>
> HTH - this deployment stuff is my special area of interest and I would be
> happy to kick around some ideas off-list if helpful :)
>
> Cheers,
>
> Sam
>
>
>
>
> On 4 March 2014 14:29, Ed James (Alt) <ed.james.spam at gmail.com> wrote:
>
>> Hi all
>>
>> We make heavy use of AWS services and we are finding that ELB is not
>> quite meeting our needs. ELB allows some level of control over traffic, but
>> it's dumb in the sense that routing is done purely on load. You cannot put
>> any real logic into ELB. What we want is to direct a user's requests based
>> on an application setting - this could be in the db, memcache, redis,
>> whatever. The retrieval of the setting is another problem, I think. It's
>> the logic around that setting's value that I'm interested in.
>>
>> We are doing a large upgrade of our platform and we want to run both the
>> new version and old version in production in parallel. We want to control
>> which customers get to see the new version and slowly increase the number
>> of customers who can. If there is a problem we can just send all traffic
>> back to the old version in an instant. This could just as easily apply to
>> large feature deployments.
>>
>> Does anyone have any experience with this kind of use-case?
>>
>> Thanks,
>> Ed.
>>
>
--
Riccardo Tacconi
Ruby on Rails and PHP development - System Administration
VIRTUELOGIC LIMITED <http://www.virtuelogic.net/>
http://github.com/rtacconi
http://riccardotacconi.blogspot.com
http://twitter.com/rtacconi