
11 Jul 2022 Serverless Jenkins on Fargate with Kaniko Agent
Introduction
Jenkins is a popular CI/CD tool used by developers to build, deploy, and test their code. Enterprises can manage their own Jenkins servers in public or private clouds, or even on their own physical infrastructure.
Recently my team has been running a large Jenkins instance and multiple agents in AWS, with hundreds of pipelines. However, the expense of doing so quickly led us to ask the question: do we really need all that infrastructure running overnight after the team has downed tools for the day? The answer, of course, was no.
In this post I’ll explain how we switched to an on-demand, serverless Jenkins environment on AWS Fargate, whilst still retaining the ability to build Docker images by using Kaniko.
Serverless Jenkins on AWS Fargate
Our starting point was this post. In it, the Jenkins master runs as a Fargate task in a dedicated ECS cluster with no EC2 instances, and shared EFS storage is attached to the master task. The following diagram is taken from the post and describes the overall solution architecture:

In our case, we decided that the infrastructure resources would be deployed with Terraform. Terraform also provisioned the Jenkins plugins and the configuration of Jenkins nodes using the Jenkins Configuration as Code plugin, driven by a YAML file. It builds the initial pipeline from that file as well:

A New Problem
This looked good, but it left us facing a new problem. We have a number of applications built as Docker images, and to build them we had been running Docker commands in the Jenkins agent containers on our hosted instances.
However, Fargate does not allow us to run a Docker daemon inside a container. That makes sense, because malicious (or vulnerable) activity in a privileged container risks impacting other containers on a shared cloud host. Nevertheless, we needed a tool that could build Docker images even without access to Docker commands.
Kaniko
There are several tools which can achieve this goal. We chose Kaniko – an open-source tool to build container images inside a container or Kubernetes cluster – because we found it easier to embed into a Jenkins agent than a tool like buildah. Kaniko is developed by Google, but is not an officially supported Google product.
Kaniko can build an image from a Dockerfile and push it to a registry. To quote from the documentation:
The kaniko executor image is responsible for building an image from a Dockerfile and pushing it to a registry. Within the executor image, we extract the filesystem of the base image (the FROM image in the Dockerfile). We then execute the commands in the Dockerfile, snapshotting the filesystem in userspace after each one. After each command, we append a layer of changed files to the base image (if there are any) and update image metadata.
— from Kaniko
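To get a feel for that flow outside of Jenkins, the official executor image can be tried locally with a plain docker run. This is only an illustration, since in our Fargate setup there is no Docker daemon at all; the sketch assumes a Dockerfile in the current directory and skips the push with --no-push:
# Illustrative local run of the official executor image (not part of the Fargate setup):
# mount the current directory as the build context and skip pushing the result.
docker run --rm \
  -v "$PWD":/workspace \
  gcr.io/kaniko-project/executor:latest \
  --context=dir:///workspace \
  --dockerfile=/workspace/Dockerfile \
  --no-push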
The idea was then to build a Jenkins Kaniko image ourselves, deploy it as a Jenkins agent in the cluster, and let it run the Kaniko command inside the agent to build and upload the image to the application registry:

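In pipeline terms, the build stage on such an agent simply invokes the Kaniko executor where it previously ran docker build and docker push. A rough sketch of that step is below; ECR_REPOSITORY_URL is a placeholder for the application registry, while WORKSPACE and BUILD_NUMBER are standard Jenkins environment variables:
# Hypothetical build step run on the Kaniko-enabled agent in place of `docker build` / `docker push`.
/kaniko/executor \
  --context=dir://"${WORKSPACE}" \
  --dockerfile="${WORKSPACE}/Dockerfile" \
  --destination="${ECR_REPOSITORY_URL}/my-app:${BUILD_NUMBER}"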
Embed Kaniko into Serverless Jenkins
The code for embedding Kaniko into a serverless Jenkins architecture can be found here. The Kaniko project recommends running kaniko commands in its official executor image. In our case we chose to maintain our own image, based on the official one.
Specifically, we copied the Kaniko binary and configuration files from the official image into an Alpine Jenkins agent image:
FROM gcr.io/kaniko-project/executor:debug AS kaniko
FROM jenkins/inbound-agent:latest-alpine
USER root
RUN apk --update add \
bash \
curl \
git \
jq \
unzip \
npm
#
# Add kaniko to this image by re-using binaries and steps from official image
#
COPY --from=kaniko /kaniko/ /kaniko/
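# Replicate the environment variables the official executor image sets
# (CA certificates, PATH, and registry credential configuration).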
ENV SSL_CERT_DIR /kaniko/ssl/certs
ENV PATH $PATH:/usr/local/bin:/kaniko
ENV DOCKER_CONFIG /kaniko/.docker/
ENV DOCKER_CREDENTIAL_GCR_CONFIG /kaniko/.config/gcloud/docker_credential_gcr_config.json
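# Install the AWS CLI v2 so the agent can interact with AWS services such as ECR.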
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip && ./aws/install
COPY files/Dockerfile.example /home/Dockerfile
COPY files/scripts/execute.sh /home/execute.sh
COPY files/config.json /root/.docker/config.json
RUN chmod 755 /home/execute.sh /home/Dockerfile /root/.docker/config.json
USER root
This Dockerfile can be found at modules/jenkins_platform/kaniko/Dockerfile.
We then added the image to our Jenkins agent registry, which is referred to by the ECS Task Definition defined by our Terraform code.
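For reference, building and pushing the agent image by hand would look roughly like the sketch below. The account ID, region, and repository name are placeholders, and the build context path follows the repository layout mentioned above:
# Hedged sketch: build the custom Jenkins/Kaniko agent image and push it to an ECR repository.
AWS_ACCOUNT_ID=123456789012
AWS_REGION=us-east-1
AGENT_REPO="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/serverless-jenkins-agent"
aws ecr get-login-password --region "${AWS_REGION}" \
  | docker login --username AWS --password-stdin "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
docker build -t "${AGENT_REPO}:latest" modules/jenkins_platform/kaniko
docker push "${AGENT_REPO}:latest"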
How Does It Work?
Deploying this Terraform module requires Terraform 0.14+, Docker 19+, and a VPC with subnets in the AWS account. To use it yourself, the deployment steps are as follows:
1. Create bootstrap resources in AWS for the Terraform state files. Go to example/bootstrap, replace my-state-bucket and my-lock-table with your preferred names, and run the following commands:
terraform init
terraform apply \
-var="state_bucket_name=my-state-bucket" \
-var="state_lock_table_name=my-lock-table"
2. Modify the variables in the deployment script example/vars.sh.example with the values created in the last step. TF_VAR_jenkins_admin_password is the password set for the Jenkins ecsuser; it is stored as an SSM parameter that is loaded by the Jenkins master:
#!/usr/bin/env bash
export TF_STATE_BUCKET="my-state-bucket"
export TF_STATE_OBJECT_KEY="serverless-jenkins.tfstate"
export TF_LOCK_DB="my-lock-table"
export AWS_REGION=""
PRIVATE_SUBNETS=''
PUBLIC_SUBNETS=''
VPC_ID=""
export TF_VAR_route53_create_alias="false"
export TF_VAR_route53_zone_id=""
export TF_VAR_route53_domain_name=""
export TF_VAR_jenkins_admin_password=""
export TF_VAR_vpc_id=${VPC_ID}
export TF_VAR_efs_subnet_ids=${PRIVATE_SUBNETS}
export TF_VAR_jenkins_controller_subnet_ids=${PRIVATE_SUBNETS}
export TF_VAR_alb_subnet_ids=${PUBLIC_SUBNETS}
3. Run example/deploy_example.sh. This builds all the Jenkins resources for this project. Once the build completes, you can find the Jenkins URL from the load balancer. Log in with the username ecsuser and the admin password for Jenkins, which is stored in the SSM Parameter Store under the name jenkins-pwd.
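If you need the password on the command line, it can also be read back from SSM with the AWS CLI; a minimal sketch using the jenkins-pwd parameter name mentioned above:
# Read the Jenkins admin password back from the SSM Parameter Store.
aws ssm get-parameter \
  --name jenkins-pwd \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text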
4. When you log in to Jenkins, the Kaniko pipeline will already be there. When the job runs, it builds an example application image from a simple Alpine base image. The Dockerfile is defined under modules/jenkins_platform/kaniko/files/Dockerfile.example. The result is then pushed to the application repository by the Kaniko build command (defined in /modules/jenkins_platform/kaniko/files/scripts/execute.sh.tpl):
#!/bin/sh -e
/kaniko/executor --dockerfile=/home/Dockerfile \
--verbosity debug \
--insecure \
--skip-tls-verify \
--force \
--destination=${repository_url}/kaniko-artifact:release-x.x.x
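After a successful run, the pushed image can be checked in ECR. Below is a hedged sketch using the AWS CLI; the repository name is inferred from the destination above and may need adjusting for your own setup:
# List the tags pushed to the application repository (repository name is an assumption).
aws ecr describe-images \
  --repository-name kaniko-artifact \
  --query 'imageDetails[].imageTags' \
  --output text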
Limitations
This solution has a number of limitations that are worth noting:
- Kaniko does not support building Windows containers
- Running Kaniko in any Docker image other than the official Kaniko image is not supported officially
- When using the --snapshotMode=time argument, Kaniko may miss file changes during snapshotting due to issues with mtime
- It cannot read content from a docker-compose.yaml file
- It cannot run integration tests that create new containers, because it is designed to build images without running containers
Conclusions
In this post, I’ve described how a serverless Jenkins architecture on AWS ECS Fargate can be extended to use Kaniko to build Docker images without a Docker daemon. It is not perfect for all use cases, but it provides an option for building Docker container applications in a serverless environment. For more information I encourage you to check out the GitHub repo.