Deploying a full stack application to Google Kubernetes Engine

Today I’ll be showing you how I got my feet wet with Kubernetes and deployed my PoC full stack web application to Google’s Kubernetes Engine.

But first, what is Kubernetes?

You’re likely familiar with the concept of containerizing applications if you’re interested in Kubernetes.

Kubernetes helps us get these containers into production by automating their deployment and making them easy to scale and manage.

What’s the app?

Currently the app is made up of three containers:

  1. Front End React App
  2. Java Jersey REST Service
  3. MySQL DB

[Image: current app architecture]

The front end application and REST service are both stateless; the MySQL DB, however, is stateful, as it requires a volume to store its data.

It is possible to run stateful containers on Kubernetes through Persistent Volumes but things get harder when you want to scale.

However, we don’t need to run our database in Kubernetes; instead we can utilise Google’s Cloud SQL, which simplifies our deployment and gives us a bunch of extra features out of the box.

Set up your new project and command line tools

  1. Go to Google Cloud Console, login and create a new project with billing set up
  2. Install gcloud
  3. Install kubectl using the freshly installed Google Cloud SDK: gcloud components install kubectl
  4. Run gcloud init, ensuring you select your new project

Uploading your images to Google’s Container Registry

First build your docker images. Here I’ve built my backend and frontend containers.

=> docker images
REPOSITORY                  TAG
poc-app/poc-frontend        latest
poc-app/poc-backend         latest

Then tag them in the format gcr.io/${PROJECT}/${IMAGENAME}:${TAGNAME}

Let’s tag both our frontend and our backend as v1.

=> gcloud config get-value project
poc-app-1234567891234
=> docker tag poc-app/poc-frontend:latest gcr.io/poc-app-1234567891234/poc-frontend:v1
=> docker tag poc-app/poc-backend:latest gcr.io/poc-app-1234567891234/poc-backend:v1

Then upload them

=> gcloud docker -- push gcr.io/poc-app-1234567891234/poc-frontend:v1
=> gcloud docker -- push gcr.io/poc-app-1234567891234/poc-backend:v1

You should now be able to see your images in the Container Registry

Creating your first Kubernetes Cluster

But first, what is a cluster?

A cluster is a collection of nodes, and a node is just computing power.

When you deploy an image you deploy it to a cluster and let Kubernetes worry about all the underlying management aspects such as which node it will run on.

Nodes that are similar in CPU, RAM, GPU and Zone are grouped together in Pools.

Why might you want multiple pools? Well, one common reason is that your application needs to be in multiple zones due to high availability requirements.

Let’s make our First Cluster

First go to the Kubernetes Engine in the Google Cloud Console and click Create Cluster.

Next, select Your First Cluster and give it a name, which you will use later.

If you take a look at the Node Pool that has been selected you will notice that it only contains 1 node of machine type small.

We only want 1 node for now because we are still testing this out and can scale later. However, you might want to upgrade the machine type now, because you can’t change it later unless you recreate the cluster.

[Image: my-first-cluster]

When you are happy with your pool configuration, click Create.
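
If you prefer the command line, a roughly equivalent cluster can be created with gcloud. This is only a sketch – the cluster name, machine type and zone below are placeholders to adjust for your own project:

=> gcloud container clusters create my-first-cluster --num-nodes=1 --machine-type=g1-small --zone=australia-southeast1-a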

After some time your Cluster will have a green tick next to it.

[Image: cluster ready]

Next you want to run the following command to set up kubectl with the new cluster

=> gcloud container clusters get-credentials <CLUSTERNAME>

If you have multiple clusters already you might need to select this new context in kubectl

=> kubectl config current-context
=> kubectl config get-contexts
=> kubectl config use-context my-cluster-name

Create the Cloud SQL Database

For the backend in our cluster to work, we need a database.

  1. Go to the Cloud SQL page in the Google Console
  2. Click Create Instance and select MySQL, then select 2nd Generation and follow through the configuration options. (note: you might want to view the advanced options – specifically the machine type to save some cost)

[Image: create database instance]
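
If you’d rather script this step, a similar instance can be created with gcloud. This is just a sketch – the instance name and region match the rest of this post, but the MySQL version and tier are assumptions you should adjust:

=> gcloud sql instances create poc-db --database-version=MYSQL_5_7 --tier=db-n1-standard-1 --region=australia-southeast1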

Once your instance is created you might need to connect to it to initialise it. To do this:

  1. Click on your new database instance on the Cloud SQL page
  2. Take note of the Public IP Address field in the Overview tab
  3. Click on the Connections Tab and Select Public IP as the connectivity type. Under Authorised Networks you want to put your current IP Address e.g. 190.60.241.198/32
  4. Connect to the database by your regular methods and run any SQL scripts needed (a quick sketch using the mysql client follows this list).
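
For example, a minimal sketch with the stock mysql client – the public IP is the one you noted above, and the app database name matches what the backend expects later:

=> mysql --host=<PUBLIC_IP> --user=root --password
mysql> CREATE DATABASE app;
mysql> -- run any schema or seed scripts here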

Next create a service account so you can access this database programmatically

  1. Go to the IAM Admin and click Create Service Account and enter a name e.g. database-service-account
  2. Select the role Cloud SQL Client
  3. Create a JSON key and download it; you will need this later.

[Image: create service account key]
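
The same service account can also be created with gcloud if you prefer. A sketch, assuming the project id from earlier and the Cloud SQL Client role:

=> gcloud iam service-accounts create database-service-account
=> gcloud projects add-iam-policy-binding poc-app-1234567891234 --member="serviceAccount:database-service-account@poc-app-1234567891234.iam.gserviceaccount.com" --role="roles/cloudsql.client"
=> gcloud iam service-accounts keys create credentials.json --iam-account=database-service-account@poc-app-1234567891234.iam.gserviceaccount.com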

Create Secrets for our Apps to use in Kubernetes

The app needs to know 3 secrets in order to run correctly.

  1. The Cloud SQL instance credentials

This is the JSON key you downloaded when creating the service account for the database

=> kubectl create secret generic cloudsql-instance-credentials --from-file=credentials.json=${INPUT.JSON}

  2. The Cloud SQL DB login credentials

=> kubectl create secret generic cloudsql-db-credentials --from-literal=username=${MYSQL_USERNAME} --from-literal=password=${MYSQL_PASSWORD}

  3. A JWT secret for the backend

=> kubectl create secret generic jwt-secret --from-literal=secret=${JWT_SECRET}

These secrets are now visible under the Configuration section in the Kubernetes Engine
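
You can also verify them from the command line:

=> kubectl get secrets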

Deploy Front End to the Cluster

We now have all the building blocks required for our apps to run. Let’s deploy to the cluster.

When deploying to the cluster we create Pods. In most use cases you will have one container per pod. In the backend example we will see a use case for having more than one.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
  namespace: default
spec:
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: gcr.io/poc-app-1234567891234/poc-frontend:v1

Declared above is a yaml file called frontend.yaml which specifies how we will deploy the frontend container.

  • Take notice of the kind field – it states that we are doing a Deployment.
  • The metadata field is also interesting – we are giving the pod a name and a label which we will use later.
  • In the spec we define what containers to run in this pod – it’s the frontend container that we uploaded at the start.

We can deploy this pod by running the below command

=> kubectl create -f frontend.yaml
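
To confirm the pod actually came up, you can query the deployment and its pods (the app=frontend label comes from the yaml above):

=> kubectl get deployments
=> kubectl get pods -l app=frontend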

Deploy the Back End to the Cluster

In order to access our database from our backend we need to use the Cloud SQL Proxy to get secure access to our database without whitelisting.

Let’s retrieve the database instance connection name so we can connect to the right database. You can find this in the Overview section of your Cloud SQL database; it should look like this: poc-app-1234567891234:australia-southeast1:poc-db

[Image: instance connection name]

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: backend
  namespace: default
spec:
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: gcr.io/poc-app-1234567891234/poc-backend:v1
          env:
            - name: MYSQL_URL
              value: jdbc:mysql://127.0.0.1:3306/app
            - name: MYSQL_APP_USERNAME
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: MYSQL_APP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            - name: JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: secret
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                      "-instances=poc-app-1234567891234:australia-southeast1:poc-db=tcp:3306",
                      "-credential_file=/secrets/cloudsql/credentials.json"]
          securityContext:
            runAsUser: 2  # non-root user
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials

Declared above is a yaml file called backend.yaml which specifies how we will deploy the backend container.

  • Look under the backend image at the env variables specified. You can see that we are passing the secrets we created earlier into environment variables inside the container.
  • Notice how two containers are specified, backend and cloudsql-proxy? This is a perfect example of when it makes sense to have multiple containers together, as the proxy facilitates access to the database for the app. Containers in the same pod can reach each other over localhost.
  • We need to pass the JSON key we created earlier for the service account to the Cloud SQL Proxy, so we use volumes to select the secret volume and volumeMounts to mount the credentials.json file to the /secrets/cloudsql directory on the cloudsql-proxy container.

We can deploy this pod by running the below command

=> kubectl create -f backend.yaml
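
Since this pod runs two containers, it’s worth checking that both started and that the proxy connected to the instance. For example:

=> kubectl get pods -l app=backend
=> kubectl logs deployment/backend -c cloudsql-proxy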

How do I see my app?

If you go to the Workload section in the Kubernetes Engine you should see both the frontend and backend running at this time, however you will have no way to access them.

In order to access our application we will configure an Ingress Controller. There are many types of Ingress controllers but for this instance I used the Nginx Ingress controller.

You could also use a LoadBalancer or NodePort service to expose these services, but an ingress controller will allow them to appear under the same IP address.

To install the nginx ingress controller I ran the following. However, you should follow the official installation guide.

=> kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)
=> kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
=> kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

We can then configure our Ingress Service via the following yaml

kind: Service
apiVersion: v1
metadata:
  name: frontend-node-service
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - port: 5000
    targetPort: 5000
    protocol: TCP
    name: http
---
kind: Service
apiVersion: v1
metadata:
  name: backend-node-service
spec:
  type: NodePort
  selector:
    app: backend
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: http
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: frontend-node-service
          servicePort: 5000
      - path: /api/*
        backend:
          serviceName: backend-node-service
          servicePort: 8080

Declared above is a yaml file called ingress.yaml which specifies the two services and the ingress resource that routes traffic to our apps.

  • The first two definitions in this file are NodePort Services which expose the apps named frontend on port 5000 and backend on port 8080.
    • port refers to what port the application is accessible on internally whereas targetPort refers to the port your application is exposing itself on. For simplicity I’ve kept them the same.
  • The final definition is the Ingress resource, where we route all traffic on the /api/ path to the backend and all remaining traffic to the frontend.
  • The annotations configure our ingress controller.
    • kubernetes.io/ingress.class states that we are using the Nginx ingress controller
    • nginx.ingress.kubernetes.io/rewrite-target will rewrite requests e.g. /api/login will be forwarded to the backend service as /login
    • kubernetes.io/ingress.global-static-ip-name allows us to expose the ingress under a static IP address called web-static-ip.
      • To create a static IP run gcloud compute addresses create web-static-ip --global. Remember to delete this later, because deleting the cluster doesn’t remove it.

NOTE: A health check is done on all of your Services in your Ingress Controller. They should all return 200 OK on the / route, otherwise the controller will not work.

We can deploy these resources by running the below command

=> kubectl create -f ingress.yaml

Congratulations!

Your app should now be visible via the ingress controller!


Go to the Discovery Page in the Kubernetes Engine and look at what IP Address your load balancer has been given!
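
Alternatively, you can grab the address straight from kubectl:

=> kubectl get ingress ingress-service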

Extra

You might also want to play around with the scaling capabilities of Kubernetes now that you are all set up.

Start by increasing your cluster size to 2 and wait for the green tick to appear, letting you know your cluster is ready.

You can then go to each of your workloads and either scale manually or set up autoscaling.

After scaling you should see that the number of pods has increased for the scaled workloads.
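
If you prefer the command line, the same scaling can be sketched with gcloud and kubectl. The replica counts and CPU threshold below are only examples:

=> gcloud container clusters resize <CLUSTERNAME> --num-nodes=2
=> kubectl scale deployment frontend --replicas=3
=> kubectl autoscale deployment backend --min=1 --max=5 --cpu-percent=80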

 

14 Comments
  • multimellon
    Posted at 06:39h, 19 December Reply

    Is it not more common to serve your frontend app via a CDN?

    • Ryan Siebert
      Posted at 16:47h, 17 May Reply

      You can still have a CDN with Kubernetes. I agree however that putting the static frontend files in a container is not needed. It was done more to have more than one container in a pod.

  • Felipe Crescencio De Oliveira
    Posted at 22:46h, 10 May Reply

    Hi, I based my solution in your tutorial, thank you very much.

    I got a problem with URLs to my backend, because nginx was not passing full URL.

    I commented about this case on Github, feel free to follow the link https://github.com/kubernetes/ingress-nginx/issues/1120#issuecomment-491258422.

    • Ryan Siebert
      Posted at 16:42h, 17 May Reply

      I’m glad the tutorial could be of help and thank you for sharing your solution to that problem.

    • Raven
      Posted at 02:34h, 04 August Reply

      This was also the case for me. Thanks for saving us time as well Felipe!

  • cgthayer
    Posted at 16:31h, 14 May Reply

    Thanks! The part about creating `cloudsql-instance-credentials` was super useful even though I’ve been using k8s on GKE for a couple of years. The docs from Google never actually details the connection between IAM service accounts and CloudSQL in a practical way.

    • Ryan Siebert
      Posted at 17:01h, 17 May Reply

      No worries, I found that part confusing. Hopefully I saved you some time.

  • Ibrahim Abou Khalil
    Posted at 21:01h, 27 April Reply

    First of all nice tutorial, but i have few questions:
    1. image: gcr.io/cloudsql-docker/gce-proxy:1.11 this line in backend.yaml, I don’t need to build the image and push it to container registry? Build by google itself?
    2.I’m trying to build a login page where when you submit the form it use ajax to request backend files and functions, how is this going on now? what I mean is how will this button know what file in the back end to use?

    • Ryan Siebert
      Posted at 09:15h, 12 May Reply

      Hi Ibrahim,

      Yes the gce-proxy is built by google specifically for the purpose of making it easy to connect to your cloud sql db. As for your second question it sounds like you need to build an api for your backend which can accept the signup and login requests. I’m unable to share the front and backend code for this project but if you let me know what language you are using for your backend I may be able to point you to a starting point on github.

  • Raven
    Posted at 05:14h, 03 August Reply

    Hi, this article is thorough and clear! Awesome job for it.
    Just a question, I am planning to add a functionality to my application (backend is with Nodejs) that uses websockets.

    You mentioned about Stateful containers being harder when scaling. Is this the case for what I am trying to do as well?

    Also, if I got it correctly – your setup is based on the cluster running with the services running on the same external ip address correct? What if I wanted like this:

    My Backend (API – Nodejs) at: api.mywebsite.com/
    My Frontend Code (Angular) at: mywebsite.com/

    Does that mean I’d have to create a separate project altogether? One for the backend and one for the frontend code?

    Apologies, still getting on advanced DevOps Topic like this. Only familiar with Docker atm.

    • Ryan Siebert
      Posted at 14:27h, 03 August Reply

      Hey Raven,

      I don’t have any first hand experience with using websockets with k8’s but I might link you to this article https://medium.com/k8scaleio/running-websocket-app-on-kubernetes-2e13eabb4c4f. As for your multiple domain issue – it may be worth trying to create an ingress controller per domain.

      • Raven
        Posted at 02:33h, 04 August

        Hi Ryan,

        This is great! I followed your extensive guide, made some tweaks and it fit right into my needs. Thank you for this!

        Moving on, I deployed it and now this is really a beginner’s question; how do I perform an update? Let’s say I made some changes to my code and now its also already in my local containers.

        Should I just push my container changes to the grc repos? Or should I rerun all the kubernetes yaml files?

  • Ryan Siebert
    Posted at 09:27h, 04 August Reply

    Hi Raven,

    If you made changes to your containers you will need to push them and then update your yaml file with the updated image version.

    You should be able to run apply again as it tries to match the state you’ve defined see https://kubectl.docs.kubernetes.io/pages/app_management/apply.html

  • Nans
    Posted at 17:09h, 11 October Reply

    This blog is very useful. I followed this and so far I got no error. But I want to ask what if I want to associate the domain name to my ingress controller? Should I enroll first the domain name to Cloud DNS?
