[LRUG] Compiling native extensions during deployment?

Thom May thom at may.lt
Sun Oct 23 07:41:45 PDT 2011


(ldndevops folk: Gareth and I have been having a conversation about
deployment, and thought it'd be worth crossposting. The full thread is at
http://lists.lrug.org/pipermail/chat-lrug.org/2011-October/006545.html)

On Sun, Oct 23, 2011 at 14:47, gareth rushgrove
<gareth.rushgrove at gmail.com> wrote:
> On 23 October 2011 14:24, Thom May <thom at may.lt> wrote:
>> So, why not packages? The state of the machine is managed by chef, so
>> pre/post scripts have little benefit (and we can use chef-solo for
>> easy idempotent post-extraction configuration).
>
> For what it's worth, I agree with that; pre/post scripts can be used
> for ill, especially in a config management setup.
>
Yup; very much so.

>> Using tarballs lets us
>> have multiple versions of an app on the machine, so
>> rollback/rollforward is just a symlink flip.
>
> I actually do this with packages (much to the horror of some pure
> package folks, I have to admit). Each deployment gets a new unique
> package name, probably equivalent to whatever name you might be using
> for your tarballs. I then have a meta package which depends on the
> last x packages. I'm experimenting with the symlink being managed
> either in the meta package or in config management - not sure which I
> prefer yet, but both work.
>

I've seen similar approaches, and to my mind there's a bit too much
indirection here.
With multiple package names you also lose some of the benefit of the
system tools, since you can't directly see which version of the app
you have deployed.
I also like the fact that our devs (who are all on OS X) can in most
cases download the exact same archive we're deploying and use it.
(This breaks down with compiled code, such as the OP's case of bundler
or our Haskell applications, but for many things it works fine.)
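
For the curious, here's roughly what that looks like as a chef
recipe - a minimal sketch only, and all the names, paths and URLs
here are invented for illustration:

    # Hypothetical recipe: fetch a versioned tarball, unpack it
    # alongside previous releases, then flip the `current' symlink.
    version = node["myapp"]["version"]   # e.g. "20111023-abc123"
    release = "/srv/myapp/releases/#{version}"

    remote_file "/tmp/myapp-#{version}.tar.gz" do
      source "http://artifacts.example.com/myapp-#{version}.tar.gz"
    end

    directory release do
      recursive true
    end

    execute "unpack myapp #{version}" do
      command "tar -xzf /tmp/myapp-#{version}.tar.gz -C #{release}"
      not_if { ::File.exist?("#{release}/REVISION") }
    end

    # Rollback/rollforward is just repointing this link at an
    # older or newer release directory.
    link "/srv/myapp/current" do
      to release
    end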

>> Our approach also decouples deployment from configuration management,
>> which i think is a good thing.
>>
>
> Ah, I think it's a bad thing :)
>
> The main reason I'm of this mind is that an application has system
> dependencies (a specific C library or MySQL version, say). By modelling
> deployments inside config management tools you can model this in code
> in one place. Anything else either ends up saying it's not a problem
> (admittedly it's often not) and managing it manually, or basically
> replicating the innards of chef/puppet in your deployment tool of
> choice (say Cap).
>

Ah. The details of the application's environment are in chef in our
model too - it's just that the act of deployment itself is not.
I'd suspect that the majority of deployments don't require the
environment to change at all.
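
To make that concrete: the environment side lives in chef as
ordinary resources, e.g. (hypothetical snippet; the package names
and version are invented):

    # The app's system dependencies are modelled in chef; the
    # deploy itself (tarball + symlink flip) happens out of band.
    package "libxml2-dev"

    package "mysql-client" do
      version "5.1.58-1ubuntu1"
    end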

> Do machines that drop out of the cluster download the latest
> tarball from the http endpoint when they come back up, or do you just
> destroy them (yay for the cloud), or leave them for manual intervention?
>

Not sure about this one yet; I suspect destroying them is going to
end up being the approach, though.

>> I completely agree that for system tools packages are the right
>> approach - we have internal packages for scribe, reconnoiter, etc -
>> but for deployments I think they're overly heavyweight.
>>
>
> I used to only do it at times due to toolchain woes, but FPM has made
> creating packages much more lightweight.

Yeah, FPM is awesome. We use it for a lot of things, including rbenv.
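
As a sketch of how that combines with your unique-name-per-deploy
idea (not our actual build script; the name, version and paths are
invented), a few lines of ruby around the fpm CLI:

    # Hypothetical build step: turn a build directory into a .deb
    # with a unique per-deploy package name via the fpm CLI.
    stamp = Time.now.strftime("%Y%m%d%H%M%S")
    system("fpm",
           "-s", "dir",                # source: a plain directory
           "-t", "deb",                # target: a Debian package
           "-n", "myapp-#{stamp}",     # unique name per deployment
           "-v", "1.0.0",
           "--prefix", "/srv/myapp/releases/#{stamp}",
           "-C", "build",              # package the contents of build/
           ".") or raise "fpm failed"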

I wonder if it'd be useful, either at LRUG or ldndevops, to have a
deployment roundtable so we can compare approaches.
-t


