Creating a Kubernetes cluster in AWS using Terraform 🚀

Written by Sivaranjan

Provisioning a Kubernetes cluster on AWS with Terraform involves several steps. Below is a high-level overview followed by a step-by-step guide.

High-Level Overview:

  1. Set up Terraform: Install Terraform and configure AWS credentials.

  2. Define Infrastructure: Write Terraform configurations to define the AWS resources required for the Kubernetes cluster.

  3. Provision Infrastructure: Use Terraform to create the AWS resources.

  4. Install Kubernetes: Set up Kubernetes on the provisioned infrastructure.

Step-by-Step Guide:

1. Install Terraform

Download and install Terraform from the official website.

2. Configure AWS Credentials

Ensure your AWS credentials are configured. You can set them in the ~/.aws/credentials file or export them as environment variables:

export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
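
Alternatively, the same values can live in the ~/.aws/credentials file, which the AWS provider reads automatically (the default profile name is assumed here; substitute your real keys):

```ini
[default]
aws_access_key_id     = your-access-key-id
aws_secret_access_key = your-secret-access-key
```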

3. Write Terraform Configuration

  1. Create a Directory for Your Terraform Files:

     mkdir terraform-k8s-cluster
     cd terraform-k8s-cluster
    
  2. Create a Terraform Configuration File (main.tf):

     provider "aws" {
       region = "us-west-2"
     }
    
     resource "aws_vpc" "k8s_vpc" {
       cidr_block = "10.0.0.0/16"
     }
    
     resource "aws_subnet" "k8s_subnet" {
       vpc_id            = aws_vpc.k8s_vpc.id
       cidr_block        = "10.0.1.0/24"
       availability_zone = "us-west-2a"
     }
    
     resource "aws_internet_gateway" "k8s_igw" {
       vpc_id = aws_vpc.k8s_vpc.id
     }
    
     resource "aws_route_table" "k8s_route_table" {
       vpc_id = aws_vpc.k8s_vpc.id
    
       route {
         cidr_block = "0.0.0.0/0"
         gateway_id = aws_internet_gateway.k8s_igw.id
       }
     }
    
     resource "aws_route_table_association" "k8s_route_table_association" {
       subnet_id      = aws_subnet.k8s_subnet.id
       route_table_id = aws_route_table.k8s_route_table.id
     }
    
     resource "aws_security_group" "k8s_sg" {
       vpc_id = aws_vpc.k8s_vpc.id

       ingress {
         from_port   = 22
         to_port     = 22
         protocol    = "tcp"
         cidr_blocks = ["0.0.0.0/0"]
       }

       # Allow all traffic between cluster nodes (API server on 6443,
       # kubelet, flannel overlay, etc.), otherwise kubeadm join will fail.
       ingress {
         from_port = 0
         to_port   = 0
         protocol  = "-1"
         self      = true
       }

       egress {
         from_port   = 0
         to_port     = 0
         protocol    = "-1"
         cidr_blocks = ["0.0.0.0/0"]
       }
     }
    
     resource "aws_instance" "k8s_master" {
       ami                         = "ami-0c55b159cbfafe1f0" # Replace with your preferred AMI ID
       instance_type               = "t3.medium" # kubeadm needs at least 2 vCPUs and 2 GB RAM on the control plane
       subnet_id                   = aws_subnet.k8s_subnet.id
       vpc_security_group_ids      = [aws_security_group.k8s_sg.id]
       associate_public_ip_address = true # required for the public_ip outputs below
       key_name                    = "your-key-name" # Replace with your key pair name

       tags = {
         Name = "k8s-master"
       }
     }

     resource "aws_instance" "k8s_worker" {
       ami                         = "ami-0c55b159cbfafe1f0" # Replace with your preferred AMI ID
       instance_type               = "t3.medium"
       subnet_id                   = aws_subnet.k8s_subnet.id
       vpc_security_group_ids      = [aws_security_group.k8s_sg.id]
       associate_public_ip_address = true
       key_name                    = "your-key-name" # Replace with your key pair name

       tags = {
         Name = "k8s-worker"
       }
     }
    
     output "master_public_ip" {
       value = aws_instance.k8s_master.public_ip
     }
    
     output "worker_public_ip" {
       value = aws_instance.k8s_worker.public_ip
     }
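
It is also good practice to pin the Terraform and provider versions so that terraform init is reproducible across machines. A minimal versions.tf might look like this (the version constraints are illustrative; adjust them to your environment):

```hcl
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}
```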
    

4. Initialize and Apply Terraform Configuration

terraform init
terraform plan
terraform apply

5. Set Up Kubernetes on EC2 Instances

After provisioning the infrastructure, SSH into the EC2 instances and install Kubernetes. The steps below are a simplified manual installation; for a production setup, consider a managed service such as Amazon EKS or a provisioning tool such as kOps.

  1. SSH into Master Node:

     ssh -i your-key.pem ec2-user@<master_public_ip>
    
  2. Install Docker (the container runtime):

     sudo yum update -y
     sudo amazon-linux-extras install -y docker
     sudo systemctl enable --now docker
     sudo usermod -a -G docker ec2-user

     Note: Kubernetes 1.24 removed built-in Docker (dockershim) support, so with recent Kubernetes versions use containerd (installed alongside Docker) as the container runtime, or add cri-dockerd.
  3. Install kubectl:

     curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
     chmod +x ./kubectl
     sudo mv ./kubectl /usr/local/bin/kubectl
    
  4. Install Minikube (optional; only for local single-node testing, not needed for the two-node cluster built here):

     curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
     sudo install minikube-linux-amd64 /usr/local/bin/minikube
    
  5. Repeat the Docker and kubectl installation steps (2 and 3) on the worker node.
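
kubectl alone is not enough to bootstrap the cluster: the kubeadm init and join steps below also need the kubeadm and kubelet packages on both nodes. On an RPM-based distribution these come from the community-owned package repository at pkgs.k8s.io; a sketch of /etc/yum.repos.d/kubernetes.repo pinned to the v1.30 minor release (the version here is an assumption, substitute the release you want):

```ini
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
```

With the repo file in place, `sudo yum install -y kubelet kubeadm kubectl` followed by `sudo systemctl enable --now kubelet` installs and starts the node components.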

6. Initialize Kubernetes Cluster

Initialize the Kubernetes cluster on the master node and join the worker node.

  1. On Master Node:

     sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    
  2. Set Up kubeconfig:

     mkdir -p $HOME/.kube
     sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
     sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  3. Install Pod Network Add-on:

     kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
    
  4. Join Worker Node: On the worker node, use the join command provided by the kubeadm init output on the master node.
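
The join command is only printed once, at the end of the kubeadm init output. If it scrolled away, you can regenerate it on the master with sudo kubeadm token create --print-join-command, or pull it back out of a saved copy of the init log. A sketch of the latter (the log contents below are illustrative, with a fake token and hash):

```shell
# Illustrative: a saved copy of the kubeadm init output (fake token/hash).
cat > /tmp/kubeadm-init.log <<'EOF'
Your Kubernetes control-plane has initialized successfully!

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.1.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1111222233334444555566667777888899990000aaaabbbbccccddddeeeeffff
EOF

# Extract the join command plus its continuation line.
grep -A1 '^kubeadm join' /tmp/kubeadm-init.log
```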

7. Verify Cluster

On the master node, confirm that both nodes have registered and eventually report Ready:

kubectl get nodes

Conclusion

You now have a Kubernetes cluster running on AWS, provisioned and configured using Terraform. For production environments, consider using managed Kubernetes services like EKS or additional tools like Kops for easier management and scalability.