Getting Started with Azure Kubernetes Service

We were recently tasked with delivering a proof of concept for a large retailer to help them easily scale their Virtual Machines (VMs) and Docker containers in the Azure cloud. This meant we had to familiarise ourselves with Azure’s Kubernetes Service and we thought it would be a good opportunity to share our findings.

Azure Kubernetes Service (AKS) is Microsoft’s tool for managing Kubernetes clusters in their cloud. In this guide we’re going to walk through some initial configuration for AKS, deploy a Spring Boot Docker container, take a look at the Kubernetes dashboard and scale both the application container instances and the VMs they can be deployed on. It might help to have a basic understanding of Kubernetes concepts before you begin.

The first thing to do is grab yourself a free*** Azure account and familiarise yourself with the Portal.

I’m going to be showing most things using the Command Line Interface (CLI) with commands that run on my MacBook, but pretty much everything can be done through the Portal and the Kubernetes Dashboard if you prefer to click on things.


We’re going to need the Azure CLI and to log in to our account.

brew update && brew install azure-cli
az login

Since we’re installing stuff let’s also grab the Kubernetes CLI.

az aks install-cli

Now let’s create a new Resource Group (or use an existing one) to put all of our shiny new resources under.

az group create --name myResourceGroup --location australiaeast

Pro Tip: When you’ve had enough fun, delete the resource group; this removes everything you’ve created under it and saves you some money.

az group delete -n myResourceGroup --yes --no-wait

Time to create our cluster! Let’s just use one Node in the pool for now; we can scale it later.

az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --node-vm-size Standard_D2_v2 --enable-addons monitoring --generate-ssh-keys

This might take a few minutes to provision so it’s a good time to brew a tea xor coffee.
Okay, so far so good. The command above creates one Node (a VM) of size D2 that we can deploy our containers to; Microsoft has a table of all preconfigured VM sizes. Next, give kubectl access to the cluster we just made. This will automatically populate the configuration file at ~/.kube/config.

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

We can also validate the state of our Node(s) in the cluster. They will take a minute or so to provision.

kubectl get nodes --watch

Before we can view the pretty dashboard, we need an account with the correct permissions. You can hook this up to your Active Directory but the quickest way to get it going is to just do the following:

kubectl create clusterrolebinding kubernetes-dashboard --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard

Note: You should definitely read more about this part of the dashboard in particular if you plan on using AKS in production.
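If cluster-admin feels like more access than you want to hand out even while experimenting, a sketch of a more restrained alternative is to bind the dashboard’s service account to Kubernetes’ built-in read-only view ClusterRole instead (the binding name below is just illustrative):

```shell
# Grant the dashboard read-only access instead of full cluster-admin.
# "view" is a ClusterRole that ships with Kubernetes by default.
kubectl create clusterrolebinding kubernetes-dashboard-view \
  --clusterrole=view \
  --serviceaccount=kube-system:kubernetes-dashboard
```

Note that with a read-only binding, dashboard features that modify the cluster (like scaling from the UI) won’t work.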


Managing things and dashboards go so well together so let’s fire one up.

az aks browse --resource-group myResourceGroup --name myAKSCluster
open http://localhost:8001/


Mmmmm… so dashboardy!


Okay, now the fun part. Let’s deploy a simple Spring Boot “Hello World” docker container I baked earlier. I’ll link the source code for those interested, but we’re just going to deploy the container directly from the DockerHub registry.

We use YAML to define our Kubernetes deployments. Here is the definition we’ll use for the demo. Save this file as hello-world.yml and I’ll explain the important bits.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-world-service
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: hello-world-service
    spec:
      containers:
      - name: hello-world-service
        image: kimb88/hello-world-spring-boot:latest
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /health
            port: 8080
          timeoutSeconds: 20
          initialDelaySeconds: 15
          periodSeconds: 10
          successThreshold: 3
          failureThreshold: 5
        ports:
        - containerPort: 8080
        env:
        - name: SPRING_PROFILES_ACTIVE # selects the Spring "dev" profile
          value: dev
        lifecycle:
          preStop:
            exec:
              # give in-flight requests time to drain before shutdown
              command: ["sleep", "15"]
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-load-balancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-world-service

This defines two components: a Deployment of containers named hello-world-service, of which there are two replicas (instances), and a Service of type LoadBalancer. The deployment will start two application container instances listening on port 8080 and assign them to Pods. The Docker image is fetched from DockerHub, but we could pull it from an Azure Container Registry if we preferred.

I’ve also defined a simple environment variable and a readinessProbe. The probe is just a way of letting Kubernetes know that our Pods are ready to serve requests. In this case we poll Spring Boot’s health actuator and wait for three successful responses.
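Once the Pods are up, you can see the probe in action from the command line; the pod name below is a placeholder, so grab a real one from the first command:

```shell
# List pods; the READY column shows 1/1 once the readiness probe passes
kubectl get pods
# Show the probe configuration and any probe failures in the Events section
kubectl describe pod <pod-name>
```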

The load balancer is the simplest way of exposing our Pods, as Pods are not usually accessible directly from the outside world.
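If you ever do need to reach a Pod directly, say for debugging, kubectl can forward a local port to the deployment without going through the load balancer (once the deployment below has been applied; the local port 9090 is arbitrary):

```shell
# Forward local port 9090 to port 8080 on one of the deployment's pods
kubectl port-forward deployment/hello-world-service 9090:8080
# Then, in another terminal:
curl -s http://localhost:9090/
```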

Issue the following command to create our resources and start the deployment.

kubectl apply -f hello-world.yml

If you switch back to the dashboard you should see the two application Pods and the load balancer starting to deploy.


We can also check the status of the load balancer in the command line. It will show its External IP when ready. Copy the External IP for future steps.

kubectl get service hello-world-load-balancer --watch
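If you’d rather script it than copy the IP by hand, a jsonpath query can pull the External IP into a shell variable once Azure has assigned it (the variable name is just illustrative):

```shell
# Extract the load balancer's public IP once it has been provisioned
EXTERNAL_IP=$(kubectl get service hello-world-load-balancer \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$EXTERNAL_IP"
```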


Now that the load balancer is ready, let’s hit it with curl and see what happens. Replace <EXTERNAL-IP> in the command below with your load balancer’s External IP.

for i in {1..5}; do echo $(curl -s http://<EXTERNAL-IP>/); done


Nice! We can see the load balancer sending us to one of the two Pods.

We’ve now got a working load balancer in front of our two Docker containers. But what if our traffic started growing and we needed another container instance? Time to scale the pods.

kubectl scale --replicas=3 deployment/hello-world-service
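We can watch the rollout while the new Pod starts:

```shell
# Blocks until all replicas are available
kubectl rollout status deployment/hello-world-service
# Or watch the pods transition to Running/Ready
kubectl get pods --watch
```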

Once the new pod is marked as ready, it will be added to the load balancer pool. When this is done, we can test again to see our new pod processing requests.

for i in {1..10}; do echo $(curl -s http://<EXTERNAL-IP>/); done


We can see our request being balanced to three Pods now!

Okay, that was super easy, but we can’t keep adding more containers since we only have a single Node in our AKS cluster; soon we’ll run out of resources on a single VM. Let’s scale the Nodes.

az aks scale --name myAKSCluster --resource-group myResourceGroup --node-count 2

This will begin to provision another VM for us and add it to our cluster. It will take a little longer to spin up but when it’s ready we should be able to see a second Node in the dashboard.


Okay, now let’s really ramp it up. Let’s double the number of container instances.

kubectl scale --replicas=6 deployment/hello-world-service

If we head back to the dashboard we can see that the three new pods have been deployed. If you look closely, they should be scheduled on our newer Node, as it has more resources available.
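You can confirm the placement from the command line too; the -o wide flag adds a NODE column showing where each Pod landed:

```shell
# Show pods with their assigned node and pod IP
kubectl get pods -o wide
```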


Let’s hit it with curl again to see them in action.

for i in {1..20}; do echo $(curl -s http://<EXTERNAL-IP>/); done


Once again that was pretty simple.

Now that you’ve seen the basics, it would be great to explore auto-scaling for Pods and Nodes.

…But that’s beyond the scope of this blog post 😉

That concludes the introduction to AKS. We’ve shown how quick and easy it is to create an AKS cluster, deploy a Spring Boot docker container and manually scale out both the number of nodes and application pods. We’ve only really scratched the surface of AKS but already you can see how important this is going to be, especially for large enterprises.

Don’t forget to delete your resource group before you log out.