Today I’ll be showing you how I got my feet wet with Kubernetes and deployed my PoC full stack web application to Google’s Kubernetes Engine.

But first, what is Kubernetes?

You’re likely familiar with the concept of containerizing applications if you’re interested in Kubernetes.

Kubernetes helps us take these containers to production by automating their deployment and making them easy to scale and manage.

What’s the app?

Currently the app is made up of 3 containers:

  1. Front End React App
  2. Java Jersey REST Service
  3. MySQL DB


The front end application and the REST service are both stateless; however, the MySQL DB is stateful, as it requires a volume to store its data.

It is possible to run stateful containers on Kubernetes through Persistent Volumes, but things get harder when you want to scale.

However, we don’t need to run our database in Kubernetes; instead we can utilise Google’s Cloud SQL, which simplifies our deployment and gives us a bunch of extra features out of the box.

Set up your new project and command line tools

  1. Go to Google Cloud Console, login and create a new project with billing set up
  2. Install gcloud
  3. Install kubectl using the freshly installed Google Cloud SDK: gcloud components install kubectl
  4. Run gcloud init, ensuring you select your new project

Uploading your images to Google’s Container Registry

First build your docker images. Here I’ve built my backend and frontend containers.

=> docker images
REPOSITORY                  TAG
poc-app/poc-frontend        latest
poc-app/poc-backend         latest

Then tag them in the format gcr.io/${PROJECT}/${IMAGENAME}:${TAGNAME}

Let’s tag both our frontend and backend images as v1.

=> gcloud config get-value project
poc-app-1234567891234
=> docker tag poc-app/poc-frontend:latest gcr.io/poc-app-1234567891234/poc-frontend:v1
=> docker tag poc-app/poc-backend:latest gcr.io/poc-app-1234567891234/poc-backend:v1

Then upload them

=> gcloud docker -- push gcr.io/poc-app-1234567891234/poc-frontend:v1
=> gcloud docker -- push gcr.io/poc-app-1234567891234/poc-backend:v1

You should now be able to see your images in the Container Registry
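
You can also list the uploaded images from the command line (the project ID here is the example one from above):

=> gcloud container images list --repository=gcr.io/poc-app-1234567891234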

Creating your first Kubernetes Cluster

But first, what is a cluster?

A cluster is a collection of nodes, and nodes are just computing power.

When you deploy an image you deploy it to a cluster and let Kubernetes worry about all the underlying management aspects such as which node it will run on.

Nodes that are similar in CPU, RAM, GPU and Zone are grouped together in Pools.

Why might you want multiple pools? Well, one common reason is that your application needs to be in multiple zones due to high availability requirements.

Let’s make our First Cluster

First go to the Kubernetes Engine in the Google Cloud Console and click Create Cluster.

Next select Your First Cluster and give it a name, which you will use later.

If you take a look at the Node Pool that has been selected you will notice that it only contains 1 node of machine type small.

We only want 1 node for now because we are still testing this out and can scale later; however, you might want to upgrade the machine type now, because you can’t change it later without recreating the cluster.
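
If you prefer the command line over the console, a rough equivalent of this single-node setup is the following (the cluster name, zone and machine type here are just example values – adjust them to your own region and budget):

=> gcloud container clusters create my-first-cluster --num-nodes=1 --machine-type=g1-small --zone=australia-southeast1-a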


When you are happy with your pool configuration, click Create.

After some time your Cluster will have a green tick next to it.


Next you want to run the following command to set up kubectl with the new cluster

=> gcloud container clusters get-credentials <CLUSTERNAME>

If you have multiple clusters already you might need to select this new context in kubectl

=> kubectl config current-context
=> kubectl config get-contexts
=> kubectl config use-context my-cluster-name
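
To confirm kubectl is now talking to the new cluster, list its nodes – you should see the single node from the pool you created:

=> kubectl get nodes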

Create the Cloud SQL Database

For the backend in our cluster to work we need a database

  1. Go to the Cloud SQL page in the Google Console
  2. Click Create Instance and select MySQL, then select 2nd Generation and follow through the configuration options (note: you might want to view the advanced options, specifically the machine type, to save some cost)


Once your instance is created you might need to connect to it to initialise it. To do this:

  1. Click on your new database instance on the Cloud SQL page
  2. Take note of the Public IP Address field in the Overview tab
  3. Click on the Connections Tab and Select Public IP as the connectivity type. Under Authorised Networks you want to put your current IP Address e.g. 190.60.241.198/32
  4. Connect to the database by your regular methods and run any SQL scripts needed (see the example below).
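
For example, with the stock mysql client (the host placeholder is the public IP from step 2, and init.sql is just a stand-in for whatever setup script you have):

=> mysql -h <DB_PUBLIC_IP> -u root -p
mysql> source init.sql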

Next create a service account so you can access this database programmatically (a gcloud equivalent is sketched after the steps below)

  1. Go to the IAM & Admin page, click Create Service Account and enter a name, e.g. database-service-account
  2. Select the role Cloud SQL Client
  3. Create a JSON key and download it; you will need this later.
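
If you would rather script this, a rough gcloud equivalent looks like the following (the account name matches the example above and the project ID is the example one from earlier):

=> gcloud iam service-accounts create database-service-account --display-name "database-service-account"
=> gcloud projects add-iam-policy-binding poc-app-1234567891234 --member serviceAccount:database-service-account@poc-app-1234567891234.iam.gserviceaccount.com --role roles/cloudsql.client
=> gcloud iam service-accounts keys create credentials.json --iam-account database-service-account@poc-app-1234567891234.iam.gserviceaccount.com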


Create Secrets for our Apps to use in Kubernetes

The app needs to know 3 secrets in order to run correctly.

  1. The Cloud SQL instance credentials: this is the JSON key you downloaded when creating the service account for the database

=> kubectl create secret generic cloudsql-instance-credentials --from-file=credentials.json=${INPUT.JSON}

  2. The Cloud SQL db login credentials

=> kubectl create secret generic cloudsql-db-credentials --from-literal=username=${MYSQL_USERNAME} --from-literal=password=${MYSQL_PASSWORD}

  3. A JWT secret for the backend

=> kubectl create secret generic jwt-secret --from-literal=secret=${JWT_SECRET}

These secrets are now visible under the Configuration section in the Kubernetes Engine
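
You can also confirm they exist from the command line:

=> kubectl get secrets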

Deploy Front End to the Cluster

We now have all the building blocks required for our apps to run. Let’s deploy to the cluster.

When deploying to the cluster we create Pods. In most use cases you will have one container per pod; in the backend example we will see a case where it makes sense to have more than one.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
  namespace: default
spec:
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: gcr.io/poc-app-1234567891234/poc-frontend:v1

Declared above is a yaml file called frontend.yaml which specifies how we will deploy the frontend container.

  • Take notice of the kind field: it states that we are doing a Deployment.
  • The metadata field is also interesting – we are giving the pod a name and a label which we will use later.
  • In the spec we define what containers to run in this pod – it’s the frontend container that we uploaded at the start.

We can deploy this pod by running the below command

=> kubectl create -f frontend.yaml
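
Once created, you can check that the pod came up by filtering on the label we set in the metadata above:

=> kubectl get pods -l app=frontend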

Deploy the Back End to the Cluster

In order to access our database from the backend we need to use the Cloud SQL Proxy, which gives us secure access without whitelisting IP addresses.

Let’s retrieve the database instance connection name so we can connect to the right database. You can find this in the Overview section of your Cloud SQL database; it should look like this: poc-app-1234567891234:australia-southeast1:poc-db
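
You can also pull the connection name out with gcloud (poc-db here is the example instance name – use whatever you named yours):

=> gcloud sql instances describe poc-db --format="value(connectionName)"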


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: backend
  namespace: default
spec:
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: gcr.io/poc-app-1234567891234/poc-backend:v1
          env:
            - name: MYSQL_URL
              value: jdbc:mysql://127.0.0.1:3306/app
            - name: MYSQL_APP_USERNAME
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: MYSQL_APP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            - name: JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: secret
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                      "-instances=poc-app-1234567891234:australia-southeast1:poc-db=tcp:3306",
                      "-credential_file=/secrets/cloudsql/credentials.json"]
          securityContext:
            runAsUser: 2  # non-root user
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials

Declared above is a yaml file called backend.yaml which specifies how we will deploy the backend container.

  • Look under the backend image at the env variables specified. You can see that we are passing in the secrets we created earlier to environment variables inside the container.
  • Notice how two containers are specified, backend and cloudsql-proxy? This is a perfect example of when it makes sense to have multiple containers together, as the proxy facilitates access to the database for the app. Containers in the same pod are visible to each other.
  • We need to pass the JSON key file we created earlier for the service account to the Cloud SQL Proxy, so we use volumes to select the secret volume and volumeMounts to mount the credentials.json file to the /secrets/cloudsql directory on the cloudsql-proxy container.

We can deploy this pod by running the below command

=> kubectl create -f backend.yaml
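
After deploying it is worth checking that both containers in the pod started and that the proxy connected to your instance (substitute the pod name reported by the first command):

=> kubectl get pods -l app=backend
=> kubectl logs <BACKEND_POD_NAME> -c cloudsql-proxy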

How do I see my app

If you go to the Workloads section in the Kubernetes Engine you should see both the frontend and backend running; however, you have no way to access them yet.

In order to access our application we will configure an Ingress Controller. There are many types of ingress controllers, but here I used the Nginx ingress controller.

You could also use a LoadBalancer or NodePort service to expose these services, but an ingress controller allows them to appear under the same IP address.

To install the nginx ingress controller I ran the following; however, you should follow the official installation guide.

=> kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)
=> kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
=> kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
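
Once these manifests are applied the controller’s LoadBalancer service should eventually get an external IP. With the manifests above the controller should end up in the ingress-nginx namespace, so you can check on it with:

=> kubectl get service -n ingress-nginx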

We can then configure our services and Ingress via the following YAML

kind: Service
apiVersion: v1
metadata:
  name: frontend-node-service
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - port: 5000
    targetPort: 5000
    protocol: TCP
    name: http
---
kind: Service
apiVersion: v1
metadata:
  name: backend-node-service
spec:
  type: NodePort
  selector:
    app: backend
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: http
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: frontend-node-service
          servicePort: 5000
      - path: /api/*
        backend:
          serviceName: backend-node-service
          servicePort: 8080

Declared above is a yaml file called ingress.yaml which specifies the two NodePort services and the Ingress rules that route traffic to them.

  • The first two definitions in this file are NodePort Services which expose the apps named frontend on port 5000 and backend on port 8080.
    • port refers to what port the application is accessible on internally whereas targetPort refers to the port your application is exposing itself on. For simplicity I’ve kept them the same.
  • The final definition is the Ingress itself, where we route all traffic on the /api/ path to the backend and all remaining traffic to the frontend.
  • The annotations configure our ingress controller.
    • kubernetes.io/ingress.class states that we are using the Nginx ingress controller
    • nginx.ingress.kubernetes.io/rewrite-target will rewrite requests e.g. /api/login will be forwarded to the backend service as /login
    • kubernetes.io/ingress.global-static-ip-name allows us to expose the ingress under a static IP address called web-static-ip.
      • To create a static IP run gcloud compute addresses create web-static-ip --global. Remember to delete this later, because deleting the cluster doesn’t remove it.

NOTE: A health check is done on all of the Services behind your Ingress controller. They should all return 200 OK on the / route, otherwise the controller will not work.

We can deploy this controller by running the below command

=> kubectl create -f ingress.yaml
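
After a short while the ingress should report an address, which you can check with:

=> kubectl get ingress ingress-service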

Congratulations

Your app should now be visible via the ingress controller!


Go to the Discovery Page in the Kubernetes Engine and look at what IP Address your load balancer has been given!

Extra

You might also want to play around with the scaling capabilities of Kubernetes now that you are all set up.

Start by increasing your cluster size to 2 and wait for the green tick that tells you your cluster is ready.

You can then go to each of your pods and either scale manually or set up auto scaling.
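
If you would rather do this from the command line, a rough sketch using kubectl (frontend and backend are the deployments created earlier; the replica count and CPU threshold are just illustrative):

=> kubectl scale deployment frontend --replicas=3
=> kubectl autoscale deployment backend --min=1 --max=3 --cpu-percent=80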

After scaling you should see that the number of pods has increased for the scaled workloads.

 
