GitOps — Weave Flux and delivery (part 1)

I stumbled onto the idea of GitOps while looking for an easy way to deploy and keep updated a system of about 20 microservices, something I considered fairly complex at the time.

What is GitOps?

GitOps is the idea of using a Git repository to describe the state of your container infrastructure and leaving it to an agent to maintain that state in practice. It has three components: a Git repository, a Docker registry and an agent. The usual flow: you build and push a container image, the agent updates your services to use that image, and then commits the corresponding change back to your Git repository.

What’s the difference between GitOps and IaC?

Infrastructure as code is broader in scope: provisioning infrastructure is its main goal, and it can handle other tasks as secondary concerns. GitOps is more of a logical extension of the CI/CD process.

What’s the advantage?

Your classic CI/CD flow: commit your code -> pipeline runs your tests -> pipeline builds your containers -> pipeline deploys your containers

The issue here is the last part. When using Kubernetes (or Docker Compose, or Swarm), you also have a bunch of YAML/JSON files that describe your infrastructure and need to be updated, presumably by the pipeline. Complex deployments require a lot of pipeline code for this. The CI agent also needs access to your clusters (even when it runs inside a cloud cluster, where managed offerings like AKS or GKE grant a sort of implicit access when using something like Jenkins' Kubernetes plugin, for example).
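To make the problem concrete, the last pipeline stage often boils down to something like this (the image and deployment names are hypothetical); the point is that the CI agent needs cluster credentials to run it:

```shell
# last stage of a classic CI/CD pipeline: push the image,
# then update the cluster directly.
# This requires a kubeconfig (or equivalent) on the CI agent.
docker push registry.example.com/my-service:$BUILD_TAG
kubectl set image deployment/my-service \
  my-service=registry.example.com/my-service:$BUILD_TAG
```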

With GitOps, you have an agent that sits inside your cluster. Its credentials for accessing the Docker registry are the same as for anything else running inside the cluster (in Kubernetes you can RBAC your way to safety), and a secret manages access to the Git repository. The agent is secured the same way as the rest of your cluster, with no extra overhead for managing credentials and so on (in fact, it runs as an ordinary Kubernetes workload).

The process becomes: commit your code -> pipeline runs your tests -> pipeline builds your containers

From there on, the agent detects new container tags and deploys them based on various rules. It can also detect interference with the resources it manages, send notifications and continuously enforce the desired state of the system.
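With Flux, those deploy rules can be expressed as annotations on the workload itself. A sketch (the workload name, image and semver range are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service          # hypothetical workload name
  annotations:
    # let the Flux agent update this workload automatically
    flux.weave.works/automated: "true"
    # only roll out tags in this semver range for the "my-service" container
    flux.weave.works/tag.my-service: semver:~1.0
spec:
  template:
    spec:
      containers:
      - name: my-service
        image: registry.example.com/my-service:1.0.0
```

When a new matching tag appears in the registry, the agent updates the image field, commits the change back to the Git repository and rolls out the new version.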

Can it Helm?

Flux, by Weaveworks (the people who brought you the well-known Weave Net network overlay for Kubernetes), also has a Helm Operator.

I’m not a fan of Helm. It’s fine for application management, and its charts, where others have done the hard work of figuring out reliable ways to set up various third-party tools on Kubernetes, are golden. But configuration management for production application stacks is a nightmare, and Tiller is a security issue at the very least. In my world, I prefer to set up a Git repository per environment and have a Flux agent for each environment manage things.

It’s not quite DRY (in fact, it’s very WET), but it beats the overhead of Helm and its configuration by a wide margin.
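A minimal sketch of that layout (the repository and file names are hypothetical): one configuration repository per environment, each watched by its own Flux agent:

```
cluster-config-staging/        # watched by the Flux agent in the staging cluster
  workloads/
    my-service-deployment.yaml
    my-service-service.yaml
cluster-config-production/     # watched by the Flux agent in the production cluster
  workloads/
    my-service-deployment.yaml
    my-service-service.yaml
```

Yes, the manifests are largely duplicated between the two repositories, but each environment's state is fully explicit and independently managed.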

How to get started?

  • Clone the Flux repo:

git clone https://github.com/fluxcd/flux
cd flux

  • Edit the deployment:

vi deploy/flux-deployment.yaml

  • Edit the args of the flux command (all flags are prefixed with double dashes):

git-url=<git repo>

git-branch=<git branch the agent checks for changes; the agent must be able to push deploy updates to it>

registry-ecr-region=<region of an ECR registry, if using AWS>

registry-ecr-include-id=<your AWS account ID, if using ECR (it is part of the repository path)>

k8s-secret-name=<name of the secret that contains the SSH key for Git access, also used in the git-key volume>

k8s-secret-data-key=<name of the key within the secret, i.e. the filename as generated by ssh-keygen>

k8s-secret-volume-mount-path=<where to mount the key, usually /etc/fluxd/ssh>
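Put together, the container args in the deployment end up looking something like this (the repo URL, branch, region, account ID and secret names are placeholder values):

```yaml
args:
- --git-url=git@github.com:example/cluster-config.git
- --git-branch=master
- --registry-ecr-region=eu-west-1
- --registry-ecr-include-id=123456789012
- --k8s-secret-name=flux-git-deploy
- --k8s-secret-data-key=flux-deploy-key
- --k8s-secret-volume-mount-path=/etc/fluxd/ssh
```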

  • kubectl apply -f deploy/flux-deployment.yaml
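Before applying, the secret referenced by the args has to exist. Assuming a key generated with ssh-keygen and the placeholder names used above, something like:

```shell
# generate a deploy key for the agent (no passphrase)
ssh-keygen -t ed25519 -f flux-deploy-key -N ""

# store the private key in the secret Flux mounts;
# the data key (here, the filename "flux-deploy-key")
# must match --k8s-secret-data-key
kubectl create secret generic flux-git-deploy --from-file=flux-deploy-key

# then add flux-deploy-key.pub as a deploy key
# (with write access) on the Git repository
```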

In the next part, I will provide a step-by-step tutorial on making it all work with Jenkins CI.