CI/CD Jenkins Integration with Kubernetes, Ansible, and Terraform

Saurabh Kharkate
Oct 4, 2021

Hello everyone! Here I've come up with another article where I explain and demonstrate how to deploy a web app on a Kubernetes cluster using some DevOps integration tools.

So, here is what we have to do:

  • Set up a Kubernetes cluster on AWS cloud using Terraform and Ansible (using kubeadm).
  • Connect the Kubernetes cluster with Jenkins so that whenever a developer pushes code to GitHub, a Jenkins job is triggered.
  • The job will build the Docker image, push it to the Docker repository, and deploy it to the Kubernetes cluster.
  • The application should update without downtime.

So let's start with the first step.

Creating a Kubernetes Cluster over the Cloud

For provisioning the instances over the cloud, I am using an Infrastructure as Code (IaC) tool called Terraform.

I have already installed and configured Terraform on my system. Below, I have created three workspaces, and the Terraform code will launch the resources in each workspace with respect to its cloud.

# terraform workspace new <workspace_name>   --> create a new workspace
# terraform workspace list                   --> list all workspaces
# terraform workspace show                   --> show the current workspace

By default, Terraform creates a workspace named default, and the * in the list marks the current workspace.

To provision resources on AWS with Terraform, I have to set up the AWS provider. For that, I create a profile using the command below, supplying the access key ID and secret access key of an IAM user in my AWS account.

# aws configure --profile <user_name>
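
With that profile in place, the Terraform provider block only needs to reference it. Here is a minimal sketch (the profile name and region are placeholders, not necessarily the exact values from my code):

provider "aws" {
  region  = "ap-south-1"     # placeholder region; use whichever region you launch in
  profile = "myprofile"      # the profile created above with `aws configure --profile`
}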

Now I can move on to the Terraform provisioning code. Before that, we should know what we have to provision in AWS, so below I have listed what I will launch using Terraform code.

1. VPC
2. Subnets
3. Route table
4. Subnet association with route table
5. Internet gateway
6. Security group
7. Three EC2 instances

VPC

Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined. This virtual network closely resembles a traditional network that you’d operate in your own data center, with the benefits of using the scalable infrastructure of AWS.
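
The VPC resource itself is only a few lines of Terraform. A minimal sketch, with an illustrative CIDR block and tag name (my actual code may use different values):

resource "aws_vpc" "k8s_vpc" {
  cidr_block           = "10.0.0.0/16"   # address range of the whole VPC (illustrative)
  enable_dns_hostnames = true            # so instances get DNS hostnames

  tags = {
    Name = "k8s-vpc"
  }
}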

Creating Subnets

A subnet is a range of IP addresses in your VPC. You can launch AWS resources, such as EC2 instances, into a specific subnet. When you create a subnet, you specify the IPv4 CIDR block for the subnet, which is a subset of the VPC CIDR block.
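
As a sketch, a public subnet carved out of the VPC above could look like this (the CIDR block and availability zone are placeholders):

resource "aws_subnet" "public_subnet" {
  vpc_id                  = aws_vpc.k8s_vpc.id
  cidr_block              = "10.0.1.0/24"   # a subset of the VPC CIDR (illustrative)
  availability_zone       = "ap-south-1a"   # placeholder AZ
  map_public_ip_on_launch = true            # give instances public IPs on launch

  tags = {
    Name = "k8s-public-subnet"
  }
}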

Creating Route Table

A route table contains a set of rules, called routes, that are used to determine where network traffic from your subnet or gateway is directed. Each subnet in your VPC must be associated with a route table, which controls the routing for the subnet (subnet route table). You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table. A subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same subnet route table.
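
Since this route table is what sends the subnet's traffic to the internet, it goes hand in hand with the internet gateway from the list above. A rough sketch (resource names are illustrative):

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.k8s_vpc.id               # attach the gateway to our VPC
}

resource "aws_route_table" "public_rt" {
  vpc_id = aws_vpc.k8s_vpc.id

  route {
    cidr_block = "0.0.0.0/0"                  # all outbound traffic...
    gateway_id = aws_internet_gateway.igw.id  # ...goes out through the internet gateway
  }
}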

Subnet association with route table

Here I need to associate the route table that points to the internet gateway with the respective subnets inside the VPC.
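
In Terraform this association is a single resource; a sketch using the names from the snippets above:

resource "aws_route_table_association" "public_assoc" {
  subnet_id      = aws_subnet.public_subnet.id
  route_table_id = aws_route_table.public_rt.id
}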

Creating Security Group

A security group is an AWS firewall solution that performs one primary function: to filter incoming and outgoing traffic from an EC2 instance. It accomplishes this filtering function at the TCP and IP layers, via their respective ports, and source/destination IP addresses.
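
A simplified security group sketch for the cluster nodes. For a demo I open SSH and the Kubernetes API server port to the world; the actual code may open more kubeadm-related ports:

resource "aws_security_group" "k8s_sg" {
  name   = "k8s-cluster-sg"
  vpc_id = aws_vpc.k8s_vpc.id

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Kubernetes API server"
    from_port   = 6443
    to_port     = 6443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"             # allow all outbound traffic
    cidr_blocks = ["0.0.0.0/0"]
  }
}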

Creating EC2 instances

How does an AWS EC2 instance work?

An Amazon EC2 instance is a virtual computing environment that you create and configure using Amazon Elastic Compute Cloud. Amazon EC2 provides scalable computing capacity in the AWS Cloud. You can use Amazon EC2 to launch as many or as few virtual servers as you need for your Code Deploy deployments.

Here I write code for three instances: one Kubernetes_Master node and two Kubernetes_Slave nodes.
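
A sketch of those instance resources, wiring in the subnet and security group from above. The AMI ID, key pair name, and instance types are placeholders (kubeadm generally wants at least 2 vCPUs and 2 GB RAM on the master):

resource "aws_instance" "k8s_master" {
  ami                    = "ami-xxxxxxxxxxxxxxxxx"   # placeholder AMI ID
  instance_type          = "t2.medium"               # master needs a bit more capacity
  subnet_id              = aws_subnet.public_subnet.id
  vpc_security_group_ids = [aws_security_group.k8s_sg.id]
  key_name               = "my-key"                  # placeholder key pair name

  tags = {
    Name = "Kubernetes_Master"
  }
}

resource "aws_instance" "k8s_slave" {
  count                  = 2                         # two worker nodes
  ami                    = "ami-xxxxxxxxxxxxxxxxx"   # placeholder AMI ID
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public_subnet.id
  vpc_security_group_ids = [aws_security_group.k8s_sg.id]
  key_name               = "my-key"

  tags = {
    Name = "Kubernetes_Slave-${count.index + 1}"
  }
}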

Since both Terraform and Ansible are configured on the same system, I create a null_resource with a local-exec provisioner that runs my Ansible playbook, which configures the Kubernetes cluster on the instances created by the code above.
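
A sketch of that null_resource; the playbook and inventory file names here are placeholders for whatever the actual repo uses:

resource "null_resource" "configure_k8s" {
  # Run only after all three instances exist
  depends_on = [aws_instance.k8s_master, aws_instance.k8s_slave]

  provisioner "local-exec" {
    # Placeholder playbook/inventory names; this is where Ansible drives kubeadm
    command = "ansible-playbook -i inventory.ini k8s-cluster.yml"
  }
}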

  • Now run our Terraform code.
  • As you can see, there were no instances present in the account before the run.

Now that the Kubernetes cluster is configured, we have to create a pipeline that pulls our code from GitHub, builds the Docker image, pushes it to our Docker Hub repo, and deploys it on our Kubernetes cluster.

Now let's start with the Jenkins part. I have another system on which Jenkins and Docker are both already configured, so let's create a pipeline.

Create Jenkins Pipeline

The first step for you would be to create a pipeline.

  1. Go to: Jenkins -> New Item
  2. Enter an item name: my_project
  3. Select Pipeline
  4. Click OK

Clone the Git Repo

The first principle of a CI/CD pipeline is to clone/check out the source code; using that same principle, we are going to clone the Git repo inside Jenkins:

stage("Git Clone"){git credentialsId: 'GIT_HUB_CREDENTIALS', url: 'https://github.com/SaurabhSK123/jenkins-k8s-ci-cd.git'
}

Jenkins store git credentials

As you know, we cannot store plaintext passwords inside Jenkins scripts, so we need to store them somewhere securely.

Jenkins Manage Credentials provides a very elegant way to store the GitHub username and password.

Go to: Jenkins -> Manage Jenkins -> Manage Credentials

After that, go to: Stores scoped to Jenkins -> global

Then select Username with Password

Then input your GitHub username and password, and always remember the ID.

Note: keep the ID somewhere safe so that you remember it (here, GIT_HUB_CREDENTIALS).

Build the Application

The next step would be to build the Java application, if you have one.

stage('Maven Build') {
    sh './mvn build'
}

Build Docker image and tag it

After a successful Maven build, we are going to build the Docker image with the name webk8sapp and then tag it for my Docker Hub repository saurabh05sk/myrepo.

stage("Docker build"){
sh 'docker version'
sh 'docker build -t webk8sapp.'
sh 'docker image list'
sh 'docker tag docker tag webapp saurabh05sk/myrepo:webk8sapp'
}

Jenkins store DockerHub credentials

For storing the Docker Hub credentials, you need to go to:

Jenkins -> Manage Jenkins -> Manage Credentials -> Stores scoped to Jenkins -> global -> Add Credentials

From the Kind dropdown please select Secret text

  1. Secret — Type in the DockerHub Password
  2. ID — DOCKER_HUB_PASSWORD
  3. Description — Docker Hub password

Docker Login via CLI

Since I am working inside Jenkins, every step I perform needs to be written as a pipeline step. After building and tagging the Docker image, we need to push it to Docker Hub. But before you can push to Docker Hub, you need to authenticate yourself via the CLI (command line interface) using docker login.

So here is the pipeline step for the Docker login:

stage("Docker Login"){
withCredentials([string(credentialsId: 'DOCKER_HUB_PASSWORD', variable: 'PASSWORD')]) {
sh 'docker login -u saurabh05sk -p $PASSWORD'
}
}

DOCKER_HUB_PASSWORD: since I can't disclose my Docker Hub password, I stored it under Manage Jenkins -> Manage Credentials and assigned it the ID DOCKER_HUB_PASSWORD.

Push Docker Image to Docker Hub

After a successful Docker login, we need to push the image to Docker Hub:

stage("Push Image to Docker Hub"){
sh 'docker push docker push saurabh05sk/myrepo:webk8sapp'
}

SSH Into k8smaster server

If you remember, we installed the SSH Pipeline Steps plugin earlier; now we are going to use that plugin to SSH into the k8s master server:

stage("SSH Into k8s Server") {
def remote = [:]
remote.name = 'k8s_master'
remote.host = '100.0.0.2'
remote.user = 'k8s-user'
remote.password = 'password'
remote.allowAnyHosts = true
}

Copy k8swebapp.yml to k8s_master server

After a successful login, copy k8swebapp.yml to the k8s master server:

stage('Put k8swebapp.yml onto k8s_master') {
    sshPut remote: remote, from: 'k8swebapp.yml', into: '.'
}

Create Kubernetes Deployment and Service

Apply the k8swebapp.yml, which will eventually:

  1. Create a deployment named k8swebapp
  2. Expose a service on a NodePort

stage('Deploy spring boot') {
    sshCommand remote: remote, command: "kubectl apply -f k8swebapp.yml"
}

So here is the final, complete pipeline script for my Jenkins CI/CD Kubernetes pipeline:

node {
    stage("Git Clone") {
        git credentialsId: 'GIT_HUB_CREDENTIALS', url: 'https://github.com/SaurabhSK123/jenkins-k8s-ci-cd.git'
    }
    stage('Maven Build') {
        sh './mvn build'
    }
    stage("Docker build") {
        sh 'docker version'
        sh 'docker build -t webk8sapp .'
        sh 'docker image list'
        sh 'docker tag webk8sapp saurabh05sk/myrepo:webk8sapp'
    }
    stage("Docker Login") {
        withCredentials([string(credentialsId: 'DOCKER_HUB_PASSWORD', variable: 'PASSWORD')]) {
            sh 'docker login -u saurabh05sk -p $PASSWORD'
        }
    }
    stage("Push Image to Docker Hub") {
        sh 'docker push saurabh05sk/myrepo:webk8sapp'
    }
    stage("SSH Into k8s Server") {
        def remote = [:]
        remote.name = 'K8S_Master'
        remote.host = 'ip-address'
        remote.user = 'k8s-user'
        remote.password = 'password'
        remote.allowAnyHosts = true

        stage('Put k8swebapp.yml onto k8smaster') {
            sshPut remote: remote, from: 'k8swebapp.yml', into: '.'
        }
        stage('Deploy pod') {
            sshCommand remote: remote, command: "kubectl apply -f k8swebapp.yml"
        }
    }
}

Conclusion

  1. The first thing I did was set up the Kubernetes cluster.
  2. Installed Jenkins on another server.
  3. Installed the 'SSH Pipeline Steps' plugin for Jenkins.
  4. Installed Docker alongside Jenkins.
  5. Set up the user group for the current user and Jenkins.
  6. Created a Jenkins pipeline script for continuous deployment.

Here is the GitHub code: https://github.com/SaurabhSK123/jenkins-k8s-ci-cd

Thanks for reading..🙏😃

Keep Sharing !!! Keep Learning !!!
