We've now delivered our application manually - congratulations! - but this isn't an ideal, long-term solution. At least not for an application that's going to grow in complexity and needs to be continuously deployed based on inbound changes. To accomplish this we're going to be using GitLab CI.
We'll start with the IaC (Terraform), followed by the application (Go), and finally we'll complete the solution by automating the CaC (Ansible).
Implementing Continuous Delivery
We have three projects that we'll be focusing on:
- The infrastructure (Terraform)
- The application (Go)
- The configuration (Ansible)
And we'll be building CI pipelines for each one of them. These pipelines will react to changes to key file types inside of their respective repositories.
The `configuration/ansible` pipeline doesn't get triggered when there's a change to the code. Instead we trigger it ourselves manually (for now) when we've successfully executed the `application/httpcats` pipeline and we want to deploy a new copy of the application.
Overall, this is the order our pipelines execute in, one by one, based on the triggers described above or a manual run in the GitLab UI:
```mermaid
graph LR
  a[Terraform] --> b
  b[Application] --> c
  c[Ansible]
```
Just to be (super) clear: the Terraform, Application, and Ansible pipelines in the above diagram are all separate pipelines and they do not trigger each other. They're triggered manually. What we're depicting above is the order in which we trigger each pipeline.
Let's go into a bit more detail on what each of the above pipelines is going to be doing at a high level.
Before we can deploy anything we need infrastructure to deploy it to. We'll do the following:
- Build the VPC and the networking
- Get the DNS in place
- Implement the ACM certificate
- Provision the load balancer
- Provision the compute resources
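The infrastructure pipeline for these steps could be sketched in a `.gitlab-ci.yml` along these lines. This is a sketch only, assuming GitLab's shared Runners; the stage names, the image, and the manual `apply` gate are assumptions, not the final pipeline we'll build:

```yaml
# Hypothetical sketch of the Terraform pipeline, not the final version.
stages:
  - validate
  - plan
  - apply

default:
  image:
    name: hashicorp/terraform:light  # assumed image
    entrypoint: [""]

validate:
  stage: validate
  script:
    - terraform init
    - terraform validate

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan -out=plan.tfplan
  artifacts:
    paths:
      - plan.tfplan  # hand the saved plan to the apply job

apply:
  stage: apply
  script:
    - terraform init
    - terraform apply plan.tfplan
  when: manual  # require a human click before changing real infrastructure
```

Passing the saved plan file between the `plan` and `apply` jobs as an artifact means the operator approves exactly the changes they reviewed.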
Then we can provision the application.
We're going to get the following tasks completed:
- Compile the application
- Archive the binary and the static files together
- Publish the artefact to the GitLab package registry
After that, we have something to deploy.
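Sketched as jobs, that might look like the following. The job names, Go version, archive layout, and package version are assumptions for illustration:

```yaml
# Hypothetical sketch of the application pipeline, not the final version.
stages:
  - build
  - publish

build:
  stage: build
  image: golang:1.21  # assumed Go version
  script:
    - go build -o httpcats .
    - tar -czf httpcats.tar.gz httpcats static/  # assumed layout: binary plus static files
  artifacts:
    paths:
      - httpcats.tar.gz

publish:
  stage: publish
  image: curlimages/curl:latest
  script:
    # Upload the archive to the project's generic package registry.
    # The package name and version here are placeholders.
    - >
      curl --header "JOB-TOKEN: ${CI_JOB_TOKEN}"
      --upload-file httpcats.tar.gz
      "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/httpcats/0.0.1/httpcats.tar.gz"
```

`CI_JOB_TOKEN`, `CI_API_V4_URL`, and `CI_PROJECT_ID` are variables GitLab injects into every job, so the publish step needs no extra credentials.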
Finally we can deploy our application using a simple Ansible Playbook:
- Pull the application package from the application repository registry
- Execute the Playbook, provisioning the systems and deploying our application
And after that we will be live.
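A deploy job for this could be sketched like so; the image, the playbook and inventory file names, and the install step are all assumptions:

```yaml
# Hypothetical sketch of the deploy pipeline, not the final version.
deploy:
  image: python:3.11  # assumed; any image that can install Ansible works
  before_script:
    - pip install ansible
  script:
    # Inventory and playbook names are placeholders.
    - ansible-playbook -i inventory.ini site.yml
  when: manual  # we trigger this by hand from the GitLab UI, as described above
```

The `when: manual` keyword is what gives us the "trigger it ourselves" behaviour described above: GitLab builds the pipeline but waits for a click before running the job.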
We've discussed GitLab previously but to summarise: it manages our git repositories for us, and now it's going to provide us with a Continuous Integration (CI) solution we can use to automate the delivery of our code.
GitLab's CI offering is very powerful. It can handle virtually any use case. It can most certainly handle our simple use case. To implement it as a solution we're going to be creating
`.gitlab-ci.yml` files in each of our repositories.
The `.gitlab-ci.yml` file is used to set up a CI pipeline for the given repository. When you commit a change to your repository and push it to your remote, GitLab looks for this file and, if found, parses it, builds a pipeline, and executes the pipeline.
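As a minimal illustration (the job name and command are arbitrary, not part of our solution), a `.gitlab-ci.yml` can be as small as a single job; committing this file is enough for GitLab to build and run a one-stage pipeline:

```yaml
# The smallest useful pipeline: one job, one implicit stage.
hello:
  script:
    - echo "Pipeline triggered by commit ${CI_COMMIT_SHORT_SHA}"
```

`CI_COMMIT_SHORT_SHA` is one of the predefined variables GitLab makes available to every job.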
GitLab CI has a concept known as a "Runner". This is an agent that runs on a compute resource, like an EC2 Instance, a Virtual Machine on a physical cluster, or a Raspberry Pi, and it checks for new CI pipelines that need to be executed. Once it finds one, it downloads the repository and runs the pipeline.
GitLab.com gives us Runners for free, limited to a set number of minutes per month. We can also run our own Runners on our own private infrastructure free of charge. The Runner itself is actually open source (like GitLab).
In the first phase of this book we're going to be using the free Runners. In a future phase we're going to get serious with security and our needs will grow more complex, so we'll eventually transition to private Runners inside our AWS networks.
In the next section we're going to start by looking at the pipeline we'll build for Terraform.