[LRUG] Continuous * (Happy New Year!)

Samuel Joseph tansaku at gmail.com
Wed Jan 9 02:08:29 PST 2019


Hi Matthias,

Great to hear from you.  I think the email reply I just sent to Gareth 
about why we are rebasing answers your first question - it's to create a 
nice clean single-threaded (linear) git history.

To address your other questions: we could be feature branching off 
master, but we feature branch off develop - all our developers work off 
the develop branch, and features get rebased into develop, squashing all 
their commits into a single easy-to-read summary - see 
https://github.com/AgileVentures/WebsiteOne (we're 100% open source and 
open development).  Then when we are ready to deploy, the changes in 
develop get rebased onto staging and eventually onto master.
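
The flow above could be sketched roughly as follows, using a throwaway 
scratch repository.  The branch names are the ones from this email; the 
exact commands and the feature-branch name are assumptions, and with a 
fully linear history the "rebase" from develop to staging amounts to a 
fast-forward:

```shell
#!/bin/sh
# Sketch of the develop -> staging -> master flow in a scratch repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git checkout -q -b develop
git config user.email demo@example.com
git config user.name "Demo User"
git commit -q --allow-empty -m "initial commit"
git branch staging
git branch master

# Feature work branches off develop, not master
git checkout -q -b feature/my-change develop
git commit -q --allow-empty -m "wip: first attempt"
git commit -q --allow-empty -m "wip: address review comments"

# Squash the feature into one easy-to-read commit on top of develop
git checkout -q develop
git merge -q --squash feature/my-change
git commit -q --allow-empty -m "Add my change (squashed summary)"

# Promote along the pipeline: each branch auto-deploys to its own server
git checkout -q staging
git merge -q --ff-only develop   # staging server picks this up
git checkout -q master
git merge -q --ff-only staging   # production deploys off master

git log --oneline
```

Because every feature arrives as a single squashed commit, each 
promotion is a fast-forward and all three branches end up pointing at 
the same linear history.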

I guess we could be deploying to both staging and production from the 
master branch, but having the separate develop, staging and production 
branches, each auto-deploying to one of the three servers, means we've 
got a clear place to go (locally and in GitHub) to see a running version 
of whatever is on each of those three branches.  And if for some reason 
production has a weird issue and we have to hot-fix (fairly rare), then 
we have the staging branch to do it on, safe in the knowledge that a 
hot-fix applied to staging will be auto-deployed to the staging server 
and won't accidentally make things worse on production (which is of 
course auto-deployed off master).
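
That hot-fix path might look something like this in a scratch repo - the 
branch and commit names are invented for illustration, but the point is 
that the fix lands on staging first (where it auto-deploys and can be 
verified) and only then gets promoted to master:

```shell
#!/bin/sh
# Sketch of the hot-fix path: staging first, production only afterwards.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git checkout -q -b master
git config user.email demo@example.com
git config user.name "Demo User"
git commit -q --allow-empty -m "release currently in production"
git branch staging

# Apply the hot-fix on staging, not directly on master
git checkout -q staging
git commit -q --allow-empty -m "hotfix: patch the production issue"
# ... staging server auto-deploys here; verify the fix manually ...

# Once staging looks good, promote the very same commit to production
git checkout -q master
git merge -q --ff-only staging   # production auto-deploys off master

git log --oneline
```

The fast-forward guarantees that what reaches production is exactly the 
commit that was just verified on the staging server, nothing more.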

The sanity checks I mention are automated to an extent.  For example, we 
have some pages involving complex JavaScript, and we have acceptance 
tests (end-to-end browser-based checks) for them, plus integration 
tests, front-end unit tests, back-end unit tests, etc.  But painful 
experience has shown that even when all the tests pass and the 
functionality works manually on developer machines, we can still end up 
with things breaking here and there due to the vagaries of the settings 
and architecture in our legacy codebase.  I'm sure our codebase is much 
smaller and less complicated than what many on the list manage every 
day, but I definitely get caught out once every six months or so.

Now maybe the solution is fixing those underlying configuration issues, 
or just a better overall design of the codebase - more precision here or 
there - and I'd love to have more resources to put into that.  Over the 
years we've invested chunks of time in, say, overhauling all of the 
acceptance tests to try and make them more reliable, and in carefully 
picking apart certain sub-systems, but I still don't feel 100% confident 
without several manual checks.  As my confidence increases between 
incidents, that 20-minute check drops to 10 and then to under 5 minutes, 
but then we have an incident and it spikes again.

There's certainly not anything new and creative being done each time - 
it's just a reflection of my lack of absolute faith in the tests and 
codebase :-)

What I am starting to sense is that other folks have their staging and 
production servers both deploying from the master branch, and that a 
successful deploy to staging is followed by a deployment of master to 
production.  But then, does no one do any manual tests on staging before 
deploying to production?  What's the trigger that says it's okay to 
deploy to production following a staging deploy?
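
One way I could imagine that trigger looking, sketched as a small 
promotion gate - the function name and the idea of passing the smoke 
check in as a command are my own invention for illustration, not 
anything Heroku- or LRUG-specific:

```shell
#!/bin/sh
# Hypothetical promotion gate: production only gets master after a
# smoke check against staging succeeds.

promote_if_staging_healthy() {
  # "$@" is any command that exercises staging: curl of a health
  # endpoint, a small smoke suite, or a human pressing an approve button.
  if "$@"; then
    echo "staging healthy - promoting master to production"
    # the real deploy step would go here, e.g. a pipeline "promote" call
    return 0
  else
    echo "staging check failed - holding the production deploy" >&2
    return 1
  fi
}

# Example: gate on a trivially-passing and a trivially-failing check
promote_if_staging_healthy true
promote_if_staging_healthy false || echo "deploy was held, as expected"
```

The manual 20-minute check would just be one possible command passed to 
the gate; as confidence in the automated checks grows, the human step 
can shrink without changing the shape of the pipeline.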

Thanks for your great questions!

Best, Sam

On 08/01/2019 11:19, Matthias Berth wrote:
> Hi Sam,
>
> Why are you rebasing in the first place?
> Can't you just make a feature branch off master, then merge it back 
> into master? And deploy to staging and production from the master branch?
>
> I also wonder why your 20 minutes sanity checks cannot be automated. 
> Are you doing something new / creative every time you do these tests?
> Great discussion!
>
> Cheers
>
> Matthias
>
> On Tue, Jan 8, 2019 at 11:48 AM Gareth Adams <g at rethada.ms 
> <mailto:g at rethada.ms>> wrote:
>
>     Sam, I don't have a grand answer to your whole question, but a
>     phrase leapt out at me and I wanted to flag it:
>
>     > rebasing along the pipeline
>
>     To me, this suggests the code on your staging branch is not the
>     same as the code you end up deploying to production (and it might
>     not be the same as the code in your master branch)
>
>     I guess there are a few reasons that could be: you're storing some
>     environment-specific configuration on your environment branches,
>     and can't merge them all together, or maybe your environment
>     branches contain a different combination of feature branches that
>     you're trying to keep control of?
>
>     Either way, I'd consider how (or if) you could change your
>     workflow to make sure you deploy the same code everywhere (or at
>     least deploy the same code to production that you deploy to
>     staging). That's the basis behind e.g. Heroku pipelines' "Promote"
>     button and it's the pattern I commonly see now
>
>     On Tue, 8 Jan 2019, 10:13 Samuel Joseph <tansaku at gmail.com
>     <mailto:tansaku at gmail.com> wrote:
>
>         Hi Gerhard,
>
>         On 07/01/2019 12:00, Gerhard Lazu wrote:
>>         Hi Sam,
>>
>>         What determines that a build can go from your development
>>         environment into staging?
>         Good question, the answer is:
>
>         1) that all the unit, integration and acceptance tests pass
>         2) that there are no merge conflicts
>         3) that the manual sanity checks on develop are coming back okay
>>         And from staging into production? If you can capture this in
>>         code, you can put it into a pipeline.
>>
>         I don't think there's any way we can remove the manual sanity
>         checks, as the acceptance tests are just not that reliable;
>         we've poured thousands of hours into them and ultimately I
>         can't see any way of making them perfect.
>
>         I didn't think the presence of a manual step would prevent us
>         using a pipeline, inasmuch as I thought of a pipeline as just
>         being a series of servers with matching branches, along which
>         code is then moved, whether manually or automatically.
>         Heroku calls such things pipelines and seems to have no support
>         for automatically moving code along them - it's purely manual
>         from what I can see.
>>
>>         Why do you have 3 pipelines?
>
>         I don't think we do.  As I understand it, we have one pipeline:
>
>         develop branch + develop server ---> staging branch + staging
>         server ---> master branch + production server
>
>         That's three paired branches/servers in one pipeline.  Here's
>         a screenshot of how Heroku presents our pipeline in their
>         pipeline interface.  Note the button "Promote to staging"
>         which allows you to manually move the code on the develop
>         server to the staging server, but doesn't actually do a rebase
>         of the code from develop branch to staging:
>
>>         Based on the questions that you're asking, I believe that it
>>         would help if you had a single pipeline.
>         I agree - I think we do, but perhaps I'm wrong ...
>>         The question that I would focus on is /What would it take to
>>         have a single pipeline that has an end-goal of creating
>>         production builds/? Here is a pipeline example which stops
>>         after it publishes a Docker image: changelog.com, CircleCI
>>         <https://circleci.com/workflow-run/065467ef-87c0-4f5e-a2ab-5e11be12403f>.
>         Ooh, thanks for sharing! I had to log in to CircleCI to see that:
>
>         but that looks really interesting.
>>         If you are using something like Docker Swarm or Kubernetes,
>>         the platform/ecosystem has all the necessary tools to keep
>>         deployment concerns self-contained. In the changelog.com
>>         <http://changelog.com> case, the Docker stack that captures
>>         the entire deployment has an update component that is
>>         responsible for app updates
>>         <https://github.com/thechangelog/changelog.com/blob/cf2ebe0de0f35c96bf664b8bc9183bd1f3468565/docker/changelog.stack.yml#L15-L31>.
>>         In this specific case, if the new app version starts and is
>>         healthy for 30 seconds, it gets automatically promoted to
>>         live. We have been using a similar approach since October
>>         2016 <http://changelog.com/podcast/254>, a Docker stack just
>>         makes it easier.
>>
>>         I want to spur your imagination by sharing the pipeline that
>>         is responsible for RabbitMQ v3.7.x
>>         <https://ci.rabbitmq.com/teams/main/pipelines/server-release:v3.7.x>.
>>         This pipeline captures what is possible if imagination is set
>>         free:
>         Wow, that RabbitMQ pipeline looks amazing - and I've only
>         captured part of it in the screenshot:
>
>>
>>         * tests & builds 30+ apps...
>>         * on all supported major runtime version...
>>         * and all supported OSes
>>         * tests upgrades
>>         * tests client support
>>         * releases alphas, betas, RCs & GAs
>>         * and publishes to all supported distribution channels
>>
>>         I hope this helps, Gerhard.
>
>         That's extremely helpful - thank you!
>
>         But so just to be clear, there is something in these pipelines
>         that you're sharing that regularly moves code from one branch
>         to another?  And that's something that CircleCI and RabbitMQ
>         provide?  Or these are pipelines where the same code in the
>         same branch is being moved through a series of servers, based
>         on tests and checks passing at each server?
>
>         Many thanks in advance
>         Best, Sam
>
>         _______________________________________________
>         Chat mailing list
>         Chat at lists.lrug.org <mailto:Chat at lists.lrug.org>
>         Archives: http://lists.lrug.org/pipermail/chat-lrug.org
>         Manage your subscription:
>         http://lists.lrug.org/options.cgi/chat-lrug.org
>         List info: http://lists.lrug.org/listinfo.cgi/chat-lrug.org
>
>
>