A custom Kubernetes Cluster on GCP in 7 minutes with Terraform and Ansible

Pierre-Yves Aillet · Zenika · Jan 21, 2019


In this blog post, I will share some of the experience I gained working with Kubernetes. The application I develop at my client’s is deployed on Kubernetes, but not on the latest version, and being curious by nature I wanted to stay up to date. That is how I got interested in Kelsey Hightower’s Kubernetes The Hard Way.

If you are as curious as I am about Kubernetes, I encourage you to take a look at this project. It will help you to:

1. Understand the inner workings of Kubernetes and the role of the different components

2. Install a Kubernetes cluster

3. Automate all the steps

Kubernetes The Hard Way

This project allows you to create a Kubernetes cluster on Google Compute Engine by rolling out each step of the deployment yourself:

1. Provisioning the VMs

2. Generating the certificates

3. Configuring the network

4. Deploying and configuring the controller components

5. Deploying and configuring the worker components

Once I had completed the tutorial, it still was not enough for me: I wanted to automate the project.

The benefits would be manifold: apart from ending up with a fully automated deployment, it would give me an opportunity to use Terraform and Ansible.

The tools

Terraform is a HashiCorp project that lets you automate the creation of cloud resources using infrastructure-as-code descriptors. Here, it is used to create the virtual machines and configure the network on Google Compute Engine.

Ansible is used to deploy the necessary components on each server according to its role (controller or worker).

Incidentally, I also discovered the cfssl utility, which greatly simplifies certificate management through JSON configuration files.

For example, to generate the CA certificate and its private key, we would use the following configuration file:
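Something like the ca-csr.json from Kubernetes The Hard Way would do, although the exact fields in the repository may differ:

{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "O": "Kubernetes",
      "OU": "CA"
    }
  ]
}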

And the following command:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
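The cfssljson -bare ca part of the pipeline writes the resulting certificate and private key to ca.pem and ca-key.pem in the current directory.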

This utility will be used to set up the PKI used in this project.

To simplify their use in the project, all these tools are integrated directly into a Docker image.
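To give an idea, a minimal sketch of such an image could look like this (the versions and download URLs are illustrative, corresponding to the tooling of early 2019; the actual Dockerfile in the repository may differ):

FROM google/cloud-sdk:alpine

# Add Ansible plus the tools needed to fetch Terraform and cfssl
RUN apk add --no-cache ansible curl unzip \
 && curl -sLo /tmp/terraform.zip https://releases.hashicorp.com/terraform/0.11.11/terraform_0.11.11_linux_amd64.zip \
 && unzip /tmp/terraform.zip -d /usr/local/bin \
 && curl -sLo /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \
 && curl -sLo /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 \
 && chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson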

How to use the project?

The whole project is available on GitHub, and we will briefly see how to use it.

Do not hesitate to try it!

Let’s start with cloning the repository:

git clone https://github.com/Zenika/k8s-on-gce.git
cd k8s-on-gce

Then configure the Google Cloud credentials: follow the service account creation instructions in the Google Cloud console and create the associated key.

Download the adc.json file and copy it to the app directory.

Modify the profile file to define the project, region, and Google Cloud zone to use:

export GCLOUD_PROJECT=<YOUR_PROJECT>
export GCLOUD_REGION= # Example europe-west1
export GCLOUD_ZONE= # Example europe-west1-b

Let’s run the command ./in.sh, which launches a Docker container with the necessary tools.
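Under the hood, a script like in.sh typically does something along these lines (a hypothetical sketch; the image name, mounts, and options in the actual script will differ):

docker run -it --rm \
  -v "$(pwd)/app:/app" \
  -w /app \
  -e GOOGLE_APPLICATION_CREDENTIALS=/app/adc.json \
  k8s-on-gce-tools \
  bash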

In this container, we can get the list of regions and zones available with the commands:

gcloud compute zones list
gcloud compute regions list

We will then issue the command ./create.sh to roll out the steps necessary to create the cluster.

The numbering of the steps corresponds to that of the original project.

Steps 1 and 2, covering prerequisites and tool installation, are not detailed here because they are already handled by the Docker image we use.

First, we create an SSH key pair to access our VMs (be careful: the private key is created without a passphrase).
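This presumably boils down to a plain ssh-keygen call along these lines (the key path is illustrative):

ssh-keygen -t rsa -b 2048 -N "" -f k8s-key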

Next we will use Terraform to create all the necessary resources:

  • Virtual machines
  • Network configuration
  • Firewall rules
  • Load balancer to access the cluster

The description of these resources is in the file 03-provisioning/kube.tf.
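For instance, a controller VM could be declared as follows (a simplified, hypothetical excerpt using Terraform 0.11-era syntax; the real file defines many more resources):

variable "gce_zone" {}

resource "google_compute_instance" "controller" {
  count        = 3
  name         = "controller-${count.index}"
  machine_type = "n1-standard-1"
  zone         = "${var.gce_zone}"

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-1804-lts"
    }
  }

  network_interface {
    network = "default"
    # An empty access_config block requests an ephemeral external IP
    access_config {}
  }
}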

The terraform init command initializes the Terraform environment and downloads the plugins for the provider in use (Google Cloud Platform in our case).

The following command starts the creation of the resources with our parameters:

terraform apply -auto-approve -var "gce_zone=${GCLOUD_ZONE}" 03-provisioning

There are then three preparation stages:

  • 04-certs: the creation of the certificates used to secure and authenticate communications between the different components and from outside the cluster
  • 05-kubeconfig: the generation of the configuration used by kubelet and kube-proxy on each of the workers
  • 06-encryption: the generation of a key used to encrypt the cluster’s Kubernetes Secrets (see the sketch below)
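The 06-encryption step, for example, produces an encryption configuration along these lines (a sketch based on the upstream tutorial; the actual generated file and key will differ):

kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64_ENCODED_KEY>
      - identity: {}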

There is also an extra step, 00-ansible, which configures Ansible by generating the inventory of the machines on which the deployment will take place.

We can then move on to step 07-etcd and launch our first Ansible playbook, which deploys etcd on each controller node of the cluster.
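Such a playbook might contain tasks along these lines (an illustrative sketch; the task names, version, and paths in the repository will differ):

- hosts: controllers
  become: true
  tasks:
    # Download and unpack the etcd release archive on each controller
    - name: Download etcd
      unarchive:
        src: https://github.com/etcd-io/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz
        dest: /tmp
        remote_src: yes

    # Put the etcd binaries on the PATH
    - name: Install etcd binaries
      copy:
        src: "/tmp/etcd-v3.3.9-linux-amd64/{{ item }}"
        dest: /usr/local/bin/
        mode: "0755"
        remote_src: yes
      loop:
        - etcd
        - etcdctl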

During step 08-kube-controller, we finish deploying the controller components (kube-apiserver, kube-controller-manager, kube-scheduler), again via an Ansible playbook.

It is then the workers’ turn, with the playbook deploying kubelet, kube-proxy, and cri-containerd: 09-kubelet.

We then need to configure our own kubectl client (step 10-kubectl) so that it points to our cluster and uses the right certificates.
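This typically comes down to a handful of kubectl config commands, along these lines (the cluster and user names are illustrative, and KUBERNETES_PUBLIC_ADDRESS stands for the address of the load balancer created earlier):

kubectl config set-cluster k8s-on-gce \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem
kubectl config set-context k8s-on-gce --cluster=k8s-on-gce --user=admin
kubectl config use-context k8s-on-gce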

The pods deployed on the different nodes receive an IP address from a network range tied to the node on which they are created.

Step 11-network-conf configures the routes that let pods on different nodes communicate.
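On GCE, this amounts to creating one route per worker node, along these lines (the network name and addresses are illustrative):

gcloud compute routes create kubernetes-route-10-200-0-0-24 \
  --network kubernetes \
  --next-hop-address 10.240.0.20 \
  --destination-range 10.200.0.0/24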

Finally, we deploy kube-dns (step 12-kube-dns), which allows services to be reached directly by name from within the cluster.

Our cluster is now accessible via the kubectl client!

For example, if we want to see the state of the worker nodes of the cluster, just run the command:

kubectl get nodes

The 13-addons directory contains the necessary elements to deploy add-ons on our cluster:

  • A dashboard that lets you view the state of the cluster and the resources it hosts
  • The Traefik ingress controller, which is used to expose services outside the cluster

Example of the dashboard with the whoami application deployed

The 14-example directory contains an example of deploying an application on the cluster using a Traefik Ingress.
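With the Ingress API of the time, such a resource could look like this (the host, service name, and port are illustrative):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whoami
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: whoami.example.com
      http:
        paths:
          - backend:
              serviceName: whoami
              servicePort: 80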

⚠️ When you’re done, remember to clean up all the created resources by running ./cleanup.sh

Conclusion

While there are much simpler ways to use Kubernetes on GCP, walking through all of the steps needed to create a Kubernetes cluster helps you better understand the role of each component and how the magic works 😉

It can also help you customize Kubernetes with your own versions of the tools, without being constrained by those shipped by any particular Kubernetes distribution.
