This post walks through how we provisioned our production Kubernetes cluster at Mobingi. Some of these steps are already automated in our case, but I will try to include as much detail as I can.
Our goals are the following:
- Provision a Kubernetes cluster on AWS using kops.
- The cluster will have two autoscaling groups: one for on-demand, one for spot instances.
- It’s going to be a gossip-based cluster.
- RBAC is enabled in the cluster.
There is no need to repeat the otherwise excellent installation instructions in the kops wiki; we can simply follow the setup there. This post covers only what is specific to our goals above.
We will skip the DNS configuration section since we are provisioning a gossip-based cluster, meaning the cluster name will be something like "<some-name>.k8s.local". We will be using "mycluster.k8s.local" as our cluster name.
We will create the keypair in AWS under EC2 -> Key Pairs -> Create Key Pair. We will need this keypair later to SSH into our cluster nodes. After saving the keypair somewhere, we will generate the public key using the following command:
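A sketch of that step; "mycluster.pem" is a placeholder for the private key file downloaded from the EC2 console (here we generate a throwaway keypair so the commands run end to end):

```shell
# "mycluster.pem" stands in for the private key downloaded from the
# EC2 console; generate a throwaway one so this is runnable as-is.
ssh-keygen -t rsa -b 2048 -N "" -f mycluster.pem -q

# Derive the public key that kops will use for the cluster nodes:
ssh-keygen -y -f mycluster.pem > mycluster.pub
```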
Create the cluster
At this point, we should already have our environment variables set, mainly KOPS_STATE_STORE. To create the cluster:
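A create command matching the goals above would look something like this; the zones, counts, and instance sizes are example values, not our actual ones:

```shell
# Gossip-based cluster (name ends in .k8s.local, so no DNS zone is
# needed) with RBAC enabled. Adjust zones/sizes to your environment.
kops create cluster \
  --name=mycluster.k8s.local \
  --state=$KOPS_STATE_STORE \
  --zones=ap-northeast-1a,ap-northeast-1c \
  --master-count=1 \
  --master-size=t2.medium \
  --node-count=2 \
  --node-size=t2.medium \
  --ssh-public-key=mycluster.pub \
  --authorization=RBAC \
  --yes
```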
It will take some time before the cluster is ready. To validate the cluster:
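Something along these lines (it reads the state store configured in KOPS_STATE_STORE):

```shell
# Repeats until the masters and nodes report as ready.
kops validate cluster --name mycluster.k8s.local
```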
Notice that we only have one instance for our master. We can also opt for a highly available master using these options, which is generally recommended for production clusters. In our experience, though, this single-master setup is good enough for development and staging clusters. There will be downtime if the master goes down, the duration of which depends on how quickly the AWS autoscaling group replaces it. During that window the Kubernetes API won't be accessible, but the nodes, including our deployed applications, will continue to work.
Spot instance autoscaling group
Once the cluster is ready, we will add another instance group for spot instances. The default instance group created in the previous command, named “nodes”, will be our on-demand group. To add:
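The new group can be created with kops create ig; "nodes-spot" and the subnet are example values:

```shell
# Creates a new instance group and opens its spec in $EDITOR.
kops create ig nodes-spot --name=mycluster.k8s.local --subnet=ap-northeast-1a
```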
We can then edit it with the following contents (modify values as needed):
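A spot instance group spec would look something like this; setting maxPrice is what makes it a spot group. The image, machine type, price, and sizes below are illustrative, not our production values:

```yaml
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: mycluster.k8s.local
  name: nodes-spot
spec:
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-01-14
  machineType: t2.medium
  # Your bid price; presence of this field marks the group as spot.
  maxPrice: "0.07"
  maxSize: 5
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-spot
  role: Node
  subnets:
  - ap-northeast-1a
```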
We can now update our cluster with the following commands:
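In sketch form:

```shell
# Push the new instance group to AWS.
kops update cluster mycluster.k8s.local --yes

# Only needed if existing instances must be replaced to pick up a change.
kops rolling-update cluster mycluster.k8s.local --yes
```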
We can now validate the cluster to see our changes:
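For example:

```shell
kops validate cluster
# The spot group's nodes should now be listed alongside the on-demand ones.
kubectl get nodes --show-labels
```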
When our cluster was created, kops also automatically generated our kubectl config file. To verify:
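Something like:

```shell
# kops merges the cluster credentials into ~/.kube/config.
kubectl config current-context   # should print mycluster.k8s.local
kubectl get nodes
```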
Setup cluster autoscaler
Clusters created using kops use autoscaling groups, but without scaling policies (at the time of writing). To enable dynamic scaling of our cluster, we will use the cluster autoscaler. Before deploying it, we need to set up some prerequisites.
First, we need to attach the following permissions to the master and node IAM roles. Go to the IAM roles console and add an inline policy to the roles created by kops. The role names will be something like:
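For a cluster named mycluster.k8s.local, kops names the roles masters.mycluster.k8s.local and nodes.mycluster.k8s.local. An inline policy along these lines grants the autoscaling permissions the cluster autoscaler needs (a sketch based on the cluster autoscaler's AWS documentation):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeAutoScalingInstances",
        "autoscaling:DescribeTags",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup"
      ],
      "Resource": "*"
    }
  ]
}
```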
The latest installation instructions can be found here. The general idea is to choose the latest yaml file, update it with your own values, and apply the file using kubectl. The readme file also provides a script that does the download and edit for us. We will be using the following command line arguments for our autoscaler:
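In the deployment yaml, the container command ends up looking something like this; the two --nodes lines at the end carry the min:max bounds and the autoscaling group names, which kops derives from the instance group and cluster names (example values):

```yaml
command:
  - ./cluster-autoscaler
  - --v=4
  - --stderrthreshold=info
  - --cloud-provider=aws
  - --skip-nodes-with-local-storage=false
  - --nodes=1:3:nodes.mycluster.k8s.local
  - --nodes=1:5:nodes-spot.mycluster.k8s.local
```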
You should update the last two lines with your own autoscaling group min/max values. Finally, we deploy our autoscaler with:
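Assuming the edited manifest was saved as cluster-autoscaler.yaml (the filename is up to you):

```shell
kubectl apply -f cluster-autoscaler.yaml

# Check that the autoscaler pod comes up.
kubectl get pods -n kube-system -l app=cluster-autoscaler
```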
That’s it. You may also want to install these addons.
SSH to a node
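Since this is a gossip-based cluster, the node names are not resolvable from outside, so we connect using the instance's IP from the EC2 console. On the Debian images kops uses by default, the login user is "admin" (this may differ for other images):

```shell
# <node-ip> is a placeholder for the instance's public or private IP;
# mycluster.pem is the keypair we created earlier.
ssh -i mycluster.pem admin@<node-ip>
```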
If you have any questions or feedback, please reach out @flowerinthenyt.
This work is licensed under a Creative Commons Attribution 4.0 International License.