Setting Up a Solid Kubernetes Cluster

Kubernetes has become the go-to solution for orchestrating containerized applications, providing scalability, resilience, and automation. Whether you’re deploying a small development cluster or a large-scale production environment, setting up Kubernetes correctly is essential for performance and security.

Prerequisites for Setting Up a Kubernetes Cluster

Before you start setting up Kubernetes, it is essential to prepare your infrastructure to ensure a smooth installation and stable performance. Below are the key prerequisites explained in detail.

1. Linux-Based Servers (Nodes)

A Kubernetes cluster consists of multiple nodes, which are individual machines that run the cluster’s workloads. These nodes can be physical servers or virtual machines (VMs), depending on your deployment choice.

Minimum Hardware Requirements

For a functional Kubernetes cluster, each node should meet these minimum specifications:

  • CPU: At least 2 CPU cores per node. More cores improve cluster performance, especially under heavy workloads.
  • RAM: Minimum 4 GB RAM per node. Production environments typically require at least 8 GB or more per node.
  • Storage: At least 20 GB of free disk space to store Kubernetes components, logs, and container images. SSDs are recommended for better performance.
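The minimums above can be checked quickly on each candidate node. A small sketch using standard GNU/Linux tools (the thresholds mirror the list, with a little slack for RAM, which is usually reported slightly below the nominal 4 GB):

```shell
# Quick prerequisite check for a prospective Kubernetes node.
cores=$(nproc)
mem_mb=$(free -m | awk '/^Mem:/{print $2}')
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')

[ "$cores" -ge 2 ]     && echo "CPU: OK ($cores cores)"        || echo "CPU: below minimum"
[ "$mem_mb" -ge 3800 ] && echo "RAM: OK (${mem_mb} MiB)"       || echo "RAM: below minimum"
[ "$disk_gb" -ge 20 ]  && echo "Disk: OK (${disk_gb} GB free)" || echo "Disk: below minimum"
```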

Recommended Cluster Configuration

A basic Kubernetes cluster typically consists of:

  • 1 Master Node: Manages the cluster and controls scheduling, networking, and scaling.
  • 2 or More Worker Nodes: Run containerized applications and handle workloads.

For high availability (HA), it is recommended to run at least 3 master nodes and 3 or more worker nodes: an odd number of masters lets the etcd quorum survive the loss of one node, so the cluster keeps functioning if a master fails.

2. Container Runtime (Docker, Containerd, or CRI-O)

Kubernetes requires a container runtime to run and manage containers efficiently. The most commonly used runtimes include:

  • Docker: Widely known and supported, but Kubernetes removed its built-in Docker integration (dockershim) in v1.24, so Docker Engine now requires the cri-dockerd adapter; containerd is generally preferred.
  • Containerd: A lightweight and Kubernetes-native runtime, now the preferred choice.
  • CRI-O: A minimal, Kubernetes-focused runtime that integrates directly with the Kubernetes Container Runtime Interface (CRI).

To install containerd on a Linux system, use:

sudo apt update && sudo apt install -y containerd
sudo systemctl enable --now containerd
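On systemd-based distributions, kubeadm clusters generally expect containerd to use the systemd cgroup driver. A sketch of the usual post-install tweak; the edit is staged on a stand-in file here so the steps are visible without root, with the real on-node commands shown in comments:

```shell
# Sketch: enable containerd's systemd cgroup driver (SystemdCgroup = true).
tmpconf=$(mktemp)
printf 'SystemdCgroup = false\n' > "$tmpconf"   # stand-in for /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$tmpconf"

# On the node itself (as root):
#   containerd config default > /etc/containerd/config.toml
#   sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
#   systemctl restart containerd
```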

If you choose Docker, ensure it is configured correctly to work with Kubernetes (since v1.24 this also means installing the separate cri-dockerd adapter, which bridges Docker Engine to the Kubernetes CRI):

sudo apt install -y docker.io
sudo systemctl enable --now docker

3. Compatible Operating System

Kubernetes supports various Linux distributions, but some are better optimized for performance and security. Recommended operating systems include:

  • Ubuntu 22.04 LTS: Most commonly used for Kubernetes deployments due to its stability and strong community support.
  • Debian 11: A lightweight, stable choice for running Kubernetes clusters.
  • CentOS 8 / Rocky Linux 8: Popular for enterprise Kubernetes setups, though CentOS 8 is now discontinued in favor of CentOS Stream.
  • Red Hat Enterprise Linux (RHEL) 8+: Used in production environments, but requires a subscription for full support.

Key OS Configurations

After installing the OS, perform the following configurations:

  • Disable Swap (required for Kubernetes to function properly):

sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab

  • Enable Kernel Modules for Kubernetes:

sudo modprobe overlay
sudo modprobe br_netfilter
echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl --system

4. Stable Network Connection

A Kubernetes cluster relies on inter-node communication, meaning all nodes must be able to communicate over the network. Consider the following:

  • Use Private IPs: Assign static private IP addresses to each node within the same subnet.

  • Open Required Ports: Ensure that the following ports are open on all nodes:

    Component                  Port          Description
    Kubernetes API Server      6443          Communication between nodes and kubectl
    etcd (cluster database)    2379-2380     Stores cluster state
    Kubelet                    10250         API communication with worker nodes
    NodePort Services          30000-32767   Exposes services outside the cluster
  • DNS and Firewall Configuration: Ensure nodes can resolve internal and external domain names, and adjust firewall rules to allow Kubernetes traffic.

To check connectivity between nodes, use:

ping <node-ip>

or

nc -zv <node-ip> 6443

5. Sudo or Root Access

Since Kubernetes installation involves modifying system settings, installing packages, and configuring network rules, administrator privileges are required.

  • If using a non-root user, ensure it has sudo privileges:

sudo usermod -aG sudo <your-username>

  • Alternatively, switch to the root user before installing Kubernetes:

sudo -i

Before setting up Kubernetes, ensure you have the right infrastructure, a supported container runtime, a compatible OS, a stable network, and administrative access. Proper preparation helps prevent installation failures and optimizes cluster performance. Once these prerequisites are met, you can proceed with installing Kubernetes and deploying your applications.

Step 1: Prepare the Infrastructure

Select the Deployment Environment

You can deploy Kubernetes in various environments, including:

  • Cloud Providers: AWS, Google Cloud, and Azure offer managed Kubernetes services like EKS, GKE, and AKS.
  • On-Premises: If you prefer more control, you can set up Kubernetes on bare-metal servers or virtual machines.
  • Hybrid Cloud: A mix of cloud and on-premises resources for greater flexibility.

Configure the Nodes

  1. Set Hostnames: Assign unique hostnames to all nodes (e.g., master-node, worker-node-1, worker-node-2).
  2. Update System Packages: Run apt update && apt upgrade -y on Debian-based systems or yum update -y on CentOS/RHEL.
  3. Disable Swap: Kubernetes requires swap to be disabled. Use swapoff -a and remove swap entries from /etc/fstab.
  4. Enable IP Forwarding: Run echo '1' | sudo tee /proc/sys/net/ipv4/ip_forward and persist it by adding net.ipv4.ip_forward = 1 to /etc/sysctl.conf.

Step 2: Install Kubernetes Components

Kubernetes requires three main components to run: kubeadm, kubelet, and kubectl.

Install Kubernetes on All Nodes

  1. Add the Kubernetes Repository

Note: the legacy apt.kubernetes.io repository has been shut down; use the community-hosted pkgs.k8s.io repository instead, replacing v1.30 with your target minor version:

sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update

Install Required Packages

sudo apt install -y kubeadm kubelet kubectl

Hold Package Versions

sudo apt-mark hold kubeadm kubelet kubectl

Step 3: Initialize the Kubernetes Cluster

On the master node, initialize the cluster using:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16

The pod network CIDR must not overlap your node network and must match the CNI plugin you install next (192.168.0.0/16 is Calico's default; Flannel expects 10.244.0.0/16).

After initialization, configure kubectl for the current user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 4: Set Up Networking

A Kubernetes network plugin is required for communication between pods. Popular options include:

  • Calico

kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml

  • Flannel

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

Step 5: Add Worker Nodes to the Cluster

Each worker node needs to join the cluster using the token generated during initialization. On each worker node, run:

sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

If you lost the token, regenerate it on the master node:

sudo kubeadm token create --print-join-command

Step 6: Verify the Cluster

Check if all nodes are connected and running:

kubectl get nodes

If the setup was successful, you should see all nodes in the Ready state.

Step 7: Deploy a Test Application

To ensure Kubernetes is working, deploy a simple Nginx pod:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort

Find the assigned NodePort with kubectl get service nginx, then access Nginx at http://<node-ip>:<node-port>.
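The same test deployment can also be expressed declaratively and applied with kubectl apply -f. A minimal sketch whose names mirror the commands above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```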

Kubernetes Security Best Practices

Securing a Kubernetes cluster is critical to preventing unauthorized access, data breaches, and system vulnerabilities. While Kubernetes offers built-in security features, proper configuration is essential to ensure your cluster remains protected.

1. Use Role-Based Access Control (RBAC) to Limit Permissions

Role-Based Access Control (RBAC) is a Kubernetes security mechanism that restricts access to cluster resources based on user roles. Without RBAC, any user with access to the cluster could perform high-privilege actions, increasing the risk of accidental misconfigurations or malicious activity.

How RBAC Works

RBAC in Kubernetes is managed through four main components:

  • Roles: Define a set of permissions within a specific namespace.
  • ClusterRoles: Define permissions at the cluster-wide level.
  • RoleBindings: Assign a role to a user or service account within a namespace.
  • ClusterRoleBindings: Assign a ClusterRole to users or groups across the entire cluster.

Implementing RBAC

To create a Role that allows only read access to Pods in a namespace, use:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]

To bind this Role to a specific user (e.g., developer), create a RoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: developer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Best Practices for RBAC

  • Follow the principle of least privilege—grant users only the permissions they need.
  • Regularly audit RoleBindings and ClusterRoleBindings to remove unnecessary permissions.
  • Use service accounts instead of default accounts for workloads.
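As an example of the last point, a dedicated service account can be bound to the pod-reader Role above instead of granting rights to the namespace's default account. A sketch (the account name ci-reader is illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-reader
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-reader-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: ci-reader
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```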

2. Enable Pod Security Policies to Restrict Container Privileges

Pod Security Policies (PSP) define rules for how pods are allowed to operate within the cluster. They prevent insecure configurations, such as running privileged containers or using host networking.

Note: Pod Security Policies were deprecated in Kubernetes 1.21 and removed in 1.25. Instead, use Pod Security Admission (PSA) or third-party tools like Kyverno or OPA/Gatekeeper.

Example Pod Security Policy (Deprecated)

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  hostNetwork: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny

Pod Security Admission (PSA) Alternative

PSA enforces security at the namespace level with three predefined modes:

  • Privileged: Allows full capabilities (not recommended for production).
  • Baseline: Prevents privileged container execution but allows common configurations.
  • Restricted: Enforces strict security policies for pods (recommended for secure environments).

To apply Restricted mode to a namespace:

kubectl label namespace my-namespace pod-security.kubernetes.io/enforce=restricted
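The same enforcement can be declared on the Namespace object itself, and the warn and audit modes can run alongside enforce to surface violations before they block workloads. A sketch for the my-namespace example above:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```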

Best Practices for Pod Security

  • Use Restricted mode for production workloads.
  • Prevent containers from running as root.
  • Restrict the use of host networking and ports.
  • Enforce read-only root filesystems for stateless applications.
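Several of these practices map directly onto a pod's securityContext. A sketch of a pod spec that satisfies the Restricted profile (the image is illustrative; it must actually run as a non-root user, e.g. an unprivileged variant):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: nginxinc/nginx-unprivileged
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
```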

3. Use Network Policies to Control Traffic Between Pods

By default, Kubernetes allows all pods to communicate with each other, which can be a security risk. Network Policies allow administrators to define rules controlling traffic between pods, namespaces, and external resources.

Example: Allowing Traffic Only From Specific Pods

To restrict traffic so that only pods with the label app=frontend can communicate with app=backend:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 80

Best Practices for Network Policies

  • Use a deny-by-default approach, allowing only necessary traffic.
  • Segment workloads using namespaces and apply specific policies to each.
  • Monitor network flows to detect unauthorized traffic patterns.
  • Use a CNI plugin that supports Network Policies, such as Calico or Cilium.
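A deny-by-default baseline is itself a small manifest: an empty podSelector selects every pod in the namespace, and listing both policy types with no allow rules blocks all ingress and egress until explicit policies permit it:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```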

4. Regularly Update Kubernetes and Container Images

Outdated Kubernetes components and container images pose significant security risks due to unpatched vulnerabilities. Regular updates help protect against known exploits.

Updating Kubernetes Components

  1. Check the current Kubernetes version (the --short flag has been removed from recent kubectl releases):

kubectl version

  2. Update kubeadm on the control plane:

sudo apt update && sudo apt install -y kubeadm
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.x.x   # Replace with the target version

  3. Update kubelet and kubectl on all nodes:

sudo apt install -y kubelet kubectl
sudo systemctl restart kubelet

Updating Container Images

  • Use official images from trusted sources to avoid supply chain attacks.
  • Scan images for vulnerabilities using tools like Trivy or Clair.
  • Enable image signing and verification with tools like Cosign.
  • Implement automated updates using GitOps tools like ArgoCD or FluxCD.

Best Practices for Updates

  • Regularly check the Kubernetes release calendar and plan upgrades accordingly.
  • Apply security patches immediately to prevent exploitation of known CVEs.
  • Test updates in a staging environment before deploying to production.

Securing a Kubernetes cluster requires multiple layers of protection, including access controls, pod security, network restrictions, and regular updates. By implementing RBAC, Pod Security Admission, Network Policies, and continuous updates, you can significantly reduce the risk of security breaches.

Setting up a Kubernetes cluster involves configuring infrastructure, installing key components, and ensuring secure networking.
