Evolution in the CI ecosystem

The CI ecosystem is rapidly changing. Adapting isn’t compulsory, but neither is survival. Can the dinosaurs evolve or will they be outmatched by new species?

Growing up with and growing out of Jenkins


For the first half of my career in software development, build servers weren’t even a thing. I can hardly remember what we did before they arrived, other than vague memories of archaic version control systems.

But then along came Jenkins, having just left its Hudson roots behind. Suddenly we had builds - monolithic nightly builds and manual releases - but hey! Builds!

Over time the builds grew in complexity and we were running unit tests and style checkers. We were even running builds on every commit - how radical! Integration tests were added, and then regression tests against real databases. I got heavily involved with maintaining a Jenkins setup and the nightmare of managing test databases on different slaves with aging hardware. I got my bruises and earned my chops.

This involvement with Jenkins also led to my path intersecting with Praqma on numerous occasions over a few years - eventually leading to a career change. In 2014 I became a full-time Praqma consultant - living, breathing, preaching and supplying continuous delivery every day.

The ecosystem is changing fast

The asteroid didn’t wipe out the dinosaurs; it just changed the ecosystem so fast that they couldn’t evolve to stay competitive

Since moving to Praqma it has become increasingly obvious to me that the world is changing faster and faster, especially in technology. This drives an endless pressure on the software industry. At the same time more and more industries are waking up and discovering that they are in fact software industries.

Those of us working in the software industry need to build software faster and better. To do that we need faster and better tools. We need build pipelines as code, scripted environments, and scalable infrastructure. In the DevOps mindset we want to empower developers and give them maximum access and control of the build and deployment tool chain. Containerized environments in Docker, cloud hosting, and orchestration tools have gone from specialized operations tools to standard tools in the developer’s toolbox.

Tools and processes always need to evolve. Having a job where I focus so closely on this area means that I often spend time weighing the pros and cons of candidate tools, trying to find the best tool for the job. I’ve become much more focused on how my trusted go-to tools are adapting to the changing environment. Are they still up to the job? It’s a question we should ask of our tools on a regular basis.

At Praqma, Jenkins was still our go-to guy. It was the build server we used internally, the one we used with many of our customers, and it was evolving fast. The Docker plugin was introduced to support dynamic container-based build agents. After a number of other community attempts, Job DSL gained a foothold as the standard way of writing build jobs and pipelines as code. Meanwhile, ever more plugins were addressing the various needs for cloud infrastructure.

This decade-old beast was slowly adapting to the environmental changes occurring on the plains of software development.

Last year I started speculating and reading about some of the new creatures on the savannah. Build servers and services were being born into a world where the modern demands mentioned above were already a thing. I wondered if some of them could bring new perspectives and possibly compete with the old traditional build servers.

In 2016 we held one of our recurring Praqma code camps. Many of the suggested topics were about investigating various competing build servers, so we decided to bunch them up in a two-hour shoot-out session. Drone.io, Concourse CI, Circle CI, Travis, GoCD, CodeShip & LambdaCD were among the candidates.

The workshop - a Tinder approach to new build servers

Concourse, 3 years old - on Tinder

We decided to split up into pairs and investigate a build server each for two hours. The goal was to get a basic build running, ideally a simple pipeline and some attempt at our recommended pretested integration workflow if time allowed. We learned as much as we could in the short time available before evaluating the tools against a few different criteria. We agreed to come back to the group after two hours and present the tool for 5-10 minutes. Here are some of the questions we had in mind:

Is it cloud hosted or self hosted? Open source or commercial?

Does it support some form of pipelines as code?

What is the level of integration with third party tools and services and does it have a plugin architecture?

How mature does it seem right now and how active is the project?

How is the visualization of pipelines? Is it sexy? What are its key selling points?

In short - let us see which ones tickle our curiosity. Two hours isn’t much, but it should give an idea, a gut feeling.

Some teams had a really tough time getting anywhere in two hours, some build servers were easy but boring, others were very niche (LambdaCD anyone?), but all had some positives and some negatives.

Swipe right on Concourse

One of the servers I had suggested for inclusion was Concourse CI. It had been mentioned by my co-worker Andrey on Slack, so I had taken a quick look and immediately found some aspects that sounded interesting. It promised first-class scripted pipelines, Docker-based isolated environments, and full interaction from the command line. And the screenshots indicated a nice visualization of pipelines far beyond what Jenkins had.

Concourse CI Screenshot

I was also on the team evaluating Concourse for the workshop, and ended up really liking what I saw.

The whole architecture of Concourse is built around three core concepts: Tasks, Resources and Jobs.

A Task is any action you want performed, i.e. a script of some sort executed in a well-defined environment.

Resources abstract anything that a task needs or outputs. A git repository or an artifact server are the obvious examples, but even time or version numbers are modeled as resources.

Jobs are conceptually functions with inputs and outputs. A job depends on a number of resources (e.g. a git repo), runs some tasks, and typically produces some output - again a resource. It runs whenever new input is available. A resource that takes output from one job can be input to another, and thus we have pipelines, as sketched below.
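
To make the wiring concrete, here is a minimal pipeline sketch in which a git resource triggers a job. The resource name, repository URL, and task file are illustrative, not taken from a real project:

resources:
- name: source-code                    # an input resource: a git repository
  type: git
  source: {uri: "https://example.com/app.git", branch: master}

jobs:
- name: build
  plan:
  - get: source-code                   # the job's input
    trigger: true                      # run whenever a new commit appears
  - task: compile
    file: source-code/ci/compile.yml   # a task definition kept in the repo itself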

Concourse is very clearly born in an age where Docker is already a tool of the trade, so containers are the default behavior. Tasks are executed in a specified Docker environment: a container is created on demand, providing a clean, well-defined and repeatable environment. Resources are also Docker containers - always with three well-defined entry points (check, in & out) implemented as scripts on well-defined paths.
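
The resource contract itself is small: every resource image provides three executables at well-known paths. A sketch of the interface, not tied to any particular resource type:

/opt/resource/check   # detect new versions; emits a JSON list of versions on stdout
/opt/resource/in      # fetch a given version into the directory passed as $1
/opt/resource/out     # push from the directory passed as $1; emits the resulting version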

This means that Concourse itself has no ingrained knowledge or special handling of things like source code or file shares. Tasks just consume input resources and write output to resources, all of which implement the standard interface. Any input and output resources defined for a job are simply made available as named folders inside the Docker container where the task is executed. A typical task could read source code from an input folder, build it, and write generated binaries to an output folder.

All the components of your pipeline are defined in human-readable YAML files and uploaded to the server via a command line tool called fly. Write your pipeline and upload it with fly set-pipeline.

hello.yml

jobs:
- name: hello-world
  plan:
  - task: say-hello
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: ubuntu}
      run:
        path: echo
        args: ["Hello, world!"]
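
Uploading and starting the pipeline then looks something like this - the target name ci and the server URL are illustrative:

$ fly -t ci login -c https://concourse.example.com   # authenticate and save the target
$ fly -t ci set-pipeline -p hello -c hello.yml       # upload (or update) the pipeline
$ fly -t ci unpause-pipeline -p hello                # newly set pipelines start paused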

For a modern day continuous delivery setup we absolutely believe that developers must have access to production-like environments. Running your build on your own developer laptop might give very different results from those on your build server. Maybe you need to run a suite of tests in an environment that can’t be replicated locally, or you need machine power far beyond the constraints of even a powerful laptop. In these cases, Concourse has you covered.

The fly command can execute any task that you have defined directly on your Concourse environment, even before it has become part of a pipeline. I can do this with my local source code, directly from my shell, and it executes on the real build infrastructure and provides output right back to my shell. This is also very useful for testing individual tasks before putting them together in jobs and pipelines.
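
As a sketch, assuming a task definition in ci/compile.yml and your local checkout as the input named source-code:

$ fly -t ci execute -c ci/compile.yml -i source-code=.   # run the task on the real infrastructure against local code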

When I look at our Continuous Delivery Maturity Model, Concourse seems to provide viable answers, especially to some of the questions that are really hard to address with other tools.

Is Concourse going to rule the plains?

Asking if Concourse will win the game over the long run is like asking if the badger-sized mammals that roamed among the dinosaurs of old would take over the world. With the hindsight of 65 million years we know that those individual species are no longer around. They evolved and were eventually succeeded by other species. But they did herald what was to come - you could see them as templates of a new pattern, a new approach to survival. The species themselves didn’t survive, but mammals in general have made their undeniable mark on the world. The initial pattern and concept were sound, but newer implementations slowly optimized the design.

I currently view Concourse CI in the same light. I strongly believe that Concourse, and other new build servers, are showing us entirely new strategies for adapting to the changing landscape of modern software development. Concourse itself is going strong and improving fast. I expect it to survive for many years. It will also at some point be replaced or outmatched by something new. However, I do believe that it is a good example of how new conceptual answers (survival strategies) will be followed and optimized by others.

Can Jenkins keep up?

The classic monolithic build servers of old, exemplified by Jenkins, are putting up a good fight.

The world wants isolated containers - we give them a Docker plugin that creates Jenkins agents on demand - but they are still agents, and Jenkins still uses standard SSH connections to talk to them once they are up. It works, but it isn’t really what Docker was designed for. Docker is meant to isolate single processes, not simulate virtual machines.

The world wants scripted pipelines - we give them the Build Flow DSL, then Job DSL, but these still just generate static freestyle jobs. Then the concepts of scripted pipeline and declarative pipeline come along. We free ourselves from the shackles of freestyle jobs and upstream/downstream dependencies, but also lose our ties to much of the existing plugin ecosystem that is no longer compatible.
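
For comparison, a minimal declarative Jenkinsfile of the kind mentioned above might look like this - the stage name and shell command are illustrative:

pipeline {
    agent { docker { image 'ubuntu' } }   // an on-demand container agent via the Docker plugin
    stages {
        stage('Build') {
            steps {
                sh 'echo "Hello, world!"'   // your real build steps go here
            }
        }
    }
}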

Both CloudBees and the community are doing an amazing job of adapting Jenkins to the modern world, and similar stories could be told of other classic build servers like TeamCity and Microsoft’s TFS/VSTS.

So, I do believe that Jenkins and its old competitors still have merit. Jenkins is an amazing tool with a huge ecosystem of plugin support for nearly everything. It can solve just about any CI/CD need that you might have. In many cases it can still do so more easily than newcomers like Concourse, which have years to go before they match this level of third-party integration. But I will still claim that these older build servers might very well be the dinosaurs of today. They are big, they are badass, they rule the world… but can they adapt?

Didelphodon - a badger-sized Mesozoic marsupial

Betting against the odds

With Jenkins still the biggest player on the build server plains, and the main runners-up following the same overall master-slave architecture, it should be a fair bet that they are here to stay. The last few years have shown Jenkins running full steam ahead to keep up.

Personally, though, I would bet against the odds…

I see the small badger-like marsupials underfoot of the dinosaurs evolving into the lions and tigers of the future…

You cannot see the extinction event before you actually die out

Velociraptor skeleton


References

Cover photo: Iguanodon - historic picture by Heinrich Harder
