Principles for successful CI+CD over the long haul

Loose coupling is not only a good software design principle, it also happens to work well in my relationship with build process automation systems. I’ve found myself in the unenviable position of having to migrate from one CI tool to another in short order.

For me, this has happened for two reasons:

  • The vendor is no longer supporting the CI+CD software for business reasons on their end (vendor-hosted solutions).
  • The way we were using the CI system is now deprecated. This could be as small as a plugin change or a whole-system refresh.

I get it. Things move forward. APIs change and systems must evolve. But every time this happens it’s an involved process to decide what to do next. Code that hasn’t been touched in months or years must be fiddled with once again just to maintain the status quo.

In this world, my fundamental needs haven’t changed … but the technical ground that I have been standing on has shifted. Washed away by the tides of progress. Ever-marching towards more interactive UIs.

So, here are some principles for consideration when it comes to using a CI+CD tool – or even evaluating the proposed “Features” that some new system offers.

Principle #1 - If the CI tool is down, there must exist a way to deploy code manually.

It’s not too often that the build server is down; however, when it is … you’re stuck until things are resolved on the CI+CD server. Maybe it was disk space this time. Maybe enough memory wasn’t allocated to the build server. Maybe there’s just a surge in volume because more and more projects are being handled, or a whole grip of PRs is being pushed through the system. Whatever the problem is, the project that’s on the books for tonight’s release ain’t getting pushed through until the system is fixed.

That is, unless your build process can be manually orchestrated in the form of a bash script – or a collection of bash scripts – that can be executed from beginning to end.

I’d suggest the following:

  1. The compilation steps can probably be performed inside of a Docker container. This reduces the variance between the local system and the build system in terms of executables and environment.
  2. The unit tests can be executed inside of the fully-built image that’ll be shipped up to production.
  3. The deployment to the target environment can reside in a publish script. If your team is already leveraging Docker, then the scripts necessary to publish to a target environment are fairly trivial.
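The three steps above can be sketched as one manually runnable script. Everything here is an assumption for illustration – the image name, the registry, and the `make test` command – and the script defaults to a dry run that only prints the commands it would execute:

```shell
#!/usr/bin/env sh
# Manual fallback pipeline: build, test, and deploy without the CI server.
# The IMAGE value, registry, and `make test` command are hypothetical.
set -eu

IMAGE="registry.example.com/myteam/myapp"
TAG="${TAG:-manual-$(date +%Y%m%d%H%M%S)}"
DRY_RUN="${DRY_RUN:-1}"   # defaults to dry-run; set DRY_RUN=0 to actually execute

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "WOULD RUN: $*"
  else
    "$@"
  fi
}

build_step()  { run docker build -t "$IMAGE:$TAG" . ; }         # 1. compile inside a container
test_step()   { run docker run --rm "$IMAGE:$TAG" make test ; } # 2. unit tests inside the built image
deploy_step() { run docker push "$IMAGE:$TAG" ; }               # 3. publish to the target environment

build_step && test_step && deploy_step
```

Because each step is an ordinary shell function wrapping ordinary docker commands, the same script runs identically on a laptop at 2 a.m. and inside whichever CI tool happens to be in favor this year.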

If you’re feeling ambitious … an entire Docker image could be constructed to handle the process from end to end.

Principle #2 - Jenkins (or another) may be the tool for today; however, the build process is here to stay.

There are plenty of continuous integration and continuous deployment software solutions available. Many of them do an excellent job, too. Some are commercial, some are open-source. Some are self-hosted while others are vendor-hosted.

A small list of build-process automation tools (CI/CD):

  • Atlassian Bamboo
  • CircleCI
  • Travis CI
  • Buildkite
  • Jenkins
  • and so on…

I’ve used every one of these tools, and they all have one thing in common: at some point in a project’s lifecycle, I’ve switched away from each of them to another tool.

What have I learned along the way?

I’ve learned not to write the integration process in a way that is closely coupled to the build tool’s language (ahem, some obscure Groovy Domain Specific Language), or to be overly reliant on plugins in the core areas of the build process itself. Sure, I’ll leverage the slackNotification plugin in Jenkins. I’ll also leverage Warnings-NG for displaying the checkstyle results, and so on. But I’m not going to assume that the plugins will remain stable – or keep the same API they expose today – well into the future. I’m also not going to use the declarative pipeline for every aspect of the build. As much as it’s promoted as being some special sauce that tastes amazing for every project … it really isn’t. It’s cool. It’s better than scripted pipelines, I suppose; however, it couples the whole process to a Jenkins-specific implementation of how a build process ought to be done.

Or, put another way … I now write most of my build scripts in the most common scripting language appropriate for the project at hand. And the choices are simple.

For Linux-hosted projects I use Bash. For Windows-based projects I’d use PowerShell. I don’t have any of those, but that’s what I’d do. For a macOS-based project, I’d still use Bash.

With this in mind, the portion of scripting that my CI tool owns is merely a wrapper covering my build process with a smattering of convenience and visual appeal. The CI tool provides exactly what I want it to – a place that reacts to some GitHub notification event and makes a complex process visually comprehensible.
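A minimal sketch of that wrapper role: the CI pipeline’s only job becomes calling a repo-local dispatcher with a stage name. The `./scripts/*.sh` paths are hypothetical placeholders, and this sketch just echoes what it would invoke rather than executing real scripts:

```shell
#!/usr/bin/env sh
# Thin dispatcher a CI step would call, e.g. `sh ci.sh build`.
# The ./scripts/*.sh paths are assumptions; point them at your real scripts.
set -eu

ci_stage() {
  case "$1" in
    build)  echo "running ./scripts/build.sh"  ;;  # would exec the real script here
    test)   echo "running ./scripts/test.sh"   ;;
    deploy) echo "running ./scripts/deploy.sh" ;;
    *)      echo "unknown stage: $1" >&2; return 1 ;;
  esac
}

ci_stage "${1:-build}"
```

Swapping CI vendors then means rewriting a handful of one-line pipeline steps, not the build process itself.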

Principle #3 - Plugin updates can cause outages.

Service outages are a fact of life. Service outages on the build and deployment systems can be a total productivity killer. When I see that update notification, I have to ask the question, “Will this break things?” And, honestly, I can’t confidently answer that question without just updating and seeing what broke. Most of the time it’s nothing. Some of the time something breaks.

So this brings me back to Principle #1 and Principle #2 – over-reliance on the CI+CD system is dangerous and should be avoided.

Should I upgrade?

The real question becomes: which of these is most likely to break as a result of upgrading X plugin – or even the CI tool itself? We need to keep things updated for security, bug-fix, and feature-enhancement reasons; however, doing so always involves some amount of risk.

Which will break?

When there are many projects on the Jenkins server, change risk becomes even greater. A project that was built months ago may no longer build – and you won’t know right away that it’s broken, either. No no, that’ll be hidden until the day you really need that process to run through from end to end.

Jack Peterson