Ansible Role to Automate a Kubernetes Multi-Node Cluster over AWS Cloud

Saurabh Kharkate
5 min read · May 5, 2021

Hey!!! Here I come with another automation article. This time we are going to find out how to automate a Kubernetes multi-node cluster using Ansible roles.

Steps to do in this demo:

We are going to launch three EC2 instances: one as the master and two as slave nodes of the K8s cluster. So we first create an Ansible role to launch the instances, and then we create roles to configure them as the K8s master and slave nodes.

Steps to configure Ansible to connect with the AWS cloud:

  • Configuring a dynamic inventory on the local system.
  • Configuring Privilege Escalation for Ansible.

Steps to create the roles for launching and configuring the K8s cluster:

  • Creating a launch_ec2 role for launching EC2 instances on the cloud.
  • Creating tasks for the launch_ec2 role.
  • Creating roles for the K8s master and slave nodes.
  • Creating tasks for the K8s master and slave roles respectively.
  • Combining the roles in one main playbook.
  • Running the main playbook.
  • Checking whether the cluster was created or not.

Let's start with the practical, step by step.

Step 1:

Configuring dynamic inventory

  • Create the Inventory directory.
  • Download ec2.py and ec2.ini from the official Ansible dynamic inventory GitHub links into the /Inventory folder. Both files should be in the same folder.
# wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py
# wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini
  • Make the ec2.py file executable.
  • If we are using Python 3, open the ec2.py file and change env python to python3 in its first line, since the script was written for Python 2 but we will run it with Python 3. So the shebang becomes “#!/usr/bin/python3”. Also comment out line no. 172 in it.
# chmod +x ec2.py
  • Open the ec2.ini file and put your AWS access key and secret key in the credentials part.
  • Set environment variables for authentication:
$ export AWS_REGION='YOUR-AWS-REGION-NAME-HERE'
$ export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXX
$ export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXX
  • Installing boto and boto3.
# pip install boto
# pip install boto3
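
With the credentials exported and boto installed, we can quickly verify that the dynamic inventory works. Assuming ec2.py was downloaded into the /Inventory folder as above, running it directly should print a JSON inventory of the account's instances:

# ./ec2.py --list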

Step 2:

Configure Privilege Escalation in the ansible.cfg file

Most EC2 instances allow us to log in as the “ec2-user” user; this is why we have to set remote_user to “ec2-user”.

  • EC2 instances allow key-based authentication; hence, we must mention the path of the private key.
  • The important part is privilege. “root” powers are required if we want to configure anything in the instance, but “ec2-user” is a general user with limited powers. Privilege Escalation is used to give “sudo” powers to a general user.
  • Visit the directory where your private key is located and change the mode of the private key file:
# chmod 400 newsk.pem
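
Putting this together, here is a minimal sketch of the relevant ansible.cfg entries. The inventory path (/Inventory) and key path (/root/newsk.pem) are assumptions based on the earlier steps; adjust them to your setup:

[defaults]
inventory = /Inventory
remote_user = ec2-user
private_key_file = /root/newsk.pem
host_key_checking = False

[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false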

Step 3:

Create a role for launching EC2 instances on AWS.

# ansible-galaxy role init <role_name>
  • Visit the tasks folder of the role and write the playbook to launch the EC2 instances in the main.yml file.

Tasks for the launch_ec2 role:
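
The original tasks file is in the GitHub repo linked at the end. Below is a minimal sketch using the ec2 module that ships with Ansible 2.9; the variable names (ami_id, instance_type, and so on) are assumptions, defined in the vars file shown next:

# roles/launch_ec2/tasks/main.yml (sketch)
- name: Launch the EC2 instances for the K8s cluster
  ec2:
    key_name: "{{ key_name }}"
    instance_type: "{{ instance_type }}"
    image: "{{ ami_id }}"
    region: "{{ region }}"
    vpc_subnet_id: "{{ subnet_id }}"
    group: "{{ sg_name }}"
    assign_public_ip: yes
    wait: yes
    count: 1
    instance_tags:
      Name: "{{ item }}"
  loop:
    - k8s_master
    - k8s_slave1
    - k8s_slave2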

  • The vars file for the launch_ec2 role:
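
Again a sketch: every value below (AMI ID, subnet, security group, key name) is a placeholder to replace with your own resources:

# roles/launch_ec2/vars/main.yml (all values hypothetical)
ami_id: ami-0xxxxxxxxxxxxxxxx   # Amazon Linux 2 AMI for your region
instance_type: t2.micro
region: ap-south-1
subnet_id: subnet-xxxxxxxx
sg_name: k8s-cluster-sg         # security group that opens the K8s ports
key_name: newsk                 # matches the newsk.pem key from Step 2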

Step 4:

Create a Role for Kubernetes Master and Kubernetes Slave.

# ansible-galaxy init <role_name>
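
For example, with the role names used in the rest of this article:

# ansible-galaxy role init k8s_master
# ansible-galaxy role init k8s_slave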

Step 5:

Steps for the configuration of the master node in the Kubernetes cluster:

  1. Install docker (as we are using the Amazon Linux 2 image, we don't need to configure a repo for docker).
  2. Start docker.
  3. Enable docker.
  4. Configure the Kubernetes repo.
  5. Install kubeadm (it will automatically install kubectl and kubelet).
  6. Enable kubelet.
  7. Pull the docker images using kubeadm.
  8. Change the docker cgroup driver from cgroupfs to systemd.
  9. Restart docker.
  10. Install iproute-tc.
  11. Set bridge-nf-call-iptables to 1.
  12. Initialize the master.
  13. Create the .kube directory.
  14. Copy /etc/kubernetes/admin.conf to $HOME/.kube/config.
  15. Change the owner of $HOME/.kube/config.
  16. Create the Flannel network.
  17. Generate the join token.

Steps for the configuration of the slave nodes in the Kubernetes cluster:

  1. Install docker (as we are using the Amazon Linux 2 image, we don't need to configure a repo for docker).
  2. Start docker.
  3. Enable docker.
  4. Configure the Kubernetes repo.
  5. Install kubeadm (it will automatically install kubectl and kubelet).
  6. Enable kubelet.
  7. Pull the docker images using kubeadm.
  8. Change the docker cgroup driver from cgroupfs to systemd.
  9. Restart docker.
  10. Install iproute-tc.
  11. Set bridge-nf-call-iptables to 1.
  12. Join the slave to the master.
  • For a better understanding of the steps in the playbooks, visit the GitHub repo linked below. 👇👇
  • Tasks for the k8s_master role:
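
The full tasks file is in the repo; below is a condensed sketch of roles/k8s_master/tasks/main.yml covering steps 1 to 17, assuming Amazon Linux 2, the standard packages.cloud.google.com Kubernetes repo, and a Flannel-friendly pod CIDR. The exact kubeadm init flags (for example ignoring the NumCPU/Mem preflight checks on small instances) are assumptions, not the only valid choice:

- name: Install docker
  package:
    name: docker
    state: present

- name: Start and enable docker
  service:
    name: docker
    state: started
    enabled: yes

- name: Configure the Kubernetes yum repo
  yum_repository:
    name: kubernetes
    description: Kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: yes
    gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg

- name: Install kubeadm (pulls in kubelet and kubectl)
  package:
    name: kubeadm
    state: present

- name: Enable kubelet
  service:
    name: kubelet
    enabled: yes

- name: Pull the control-plane images
  command: kubeadm config images pull

- name: Change the docker cgroup driver from cgroupfs to systemd
  copy:
    dest: /etc/docker/daemon.json
    content: '{ "exec-opts": ["native.cgroupdriver=systemd"] }'

- name: Restart docker
  service:
    name: docker
    state: restarted

- name: Install iproute-tc
  package:
    name: iproute-tc
    state: present

- name: Set bridge-nf-call-iptables to 1
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: "1"
    state: present

- name: Initialize the master
  command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU,Mem

- name: Create the .kube directory and copy the admin kubeconfig
  shell: mkdir -p $HOME/.kube && cp /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config

- name: Create the Flannel network
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- name: Generate the join token
  command: kubeadm token create --print-join-command
  register: join_command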

The k8s_slave role files:
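
The slave tasks repeat steps 1 to 11 of the master sketch above; only the final task differs. A minimal sketch of that join step, assuming the join command generated on the master is exposed to the slaves as a join_command variable (for example via hostvars from the registered result above):

# roles/k8s_slave/tasks/main.yml ends with the join step
- name: Join the slave to the master
  command: "{{ join_command }}"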

Step 6:

Now combine all the roles in one main playbook.

  • main.yml
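
A sketch of what main.yml could look like. The tag_Name_* group names are an assumption: they are the groups that ec2.py builds from the Name tags we set in the launch_ec2 role:

- hosts: localhost
  connection: local
  roles:
    - launch_ec2

- hosts: tag_Name_k8s_master
  roles:
    - k8s_master

- hosts: tag_Name_k8s_slave1:tag_Name_k8s_slave2
  roles:
    - k8s_slave

Note that the dynamic inventory is read when the playbook starts, so in practice the newly launched instances may need an inventory refresh (for example a meta: refresh_inventory task and a wait for SSH) before the configuration plays can see them.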

Step 7:

Run the playbook
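
Assuming the main playbook was saved as main.yml in the workspace:

# ansible-playbook main.yml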

Step 8:

Now check the AWS EC2 instances, and check the nodes from the master node.

# kubectl get nodes
# kubectl get pods -n kube-system
  • Here we can see that our slave nodes are connected to the master node and our Kubernetes cluster is ready.

Github Repo 👇👇

!!! Task completed Successfully !!!! 😃😃😉

☘☘Keep Sharing!!! , Keep Learning!!! ☘☘

🙏🙏Thanks for Reading 🙏🙏
