How to deploy a Linode instance with Kubernetes using Terraform Part 2
Welcome to the second part of setting up a Linode instance with Kubernetes using Terraform. At the end of Part 1, we had successfully deployed the Linode instance.
Let's begin by editing the "linode_instance" resource. We will add a provisioner of type "remote-exec", which allows us to run commands on the remote machine over SSH. It authenticates using the private key retrieved from the tls_private_key resource we created earlier.
Previous Post: How to deploy a Linode instance with Kubernetes using Terraform Part 1 (phiptech.com)
- Copy/replace the code below into your existing main.tf
1. Add the provisioner to execute commands
main.tf
resource "linode_instance" "master" {
label = "k8s-master"
region = var.region
type = var.linode_type
disk {
label = "boot"
size = 50000
image = "linode/ubuntu20.04"
root_pass = var.root_password
authorized_keys = [chomp(tls_private_key.ssh.public_key_openssh)]
}
config {
label = "boot-existing-volume"
kernel = "linode/latest-64bit"
devices {
sda { disk_label = "boot" }
sdb { volume_id = linode_volume.data_volume.id }
}
}
provisioner "remote-exec" {
connection {
type = "ssh"
user = "root"
host = linode_instance.master.ip_address
private_key = tls_private_key.ssh.private_key_pem
}
inline = [
#Config disks
"echo Formatting disk and mounting to /mnt/data",
"sudo mkfs.ext4 -F /dev/sdb",
"sudo mkdir /mnt/data",
"sudo mount /dev/sdb /mnt/data",
"sudo echo /dev/sdb /mnt/data ext4 defaults 0 0 | sudo tee -a /etc/fstab",
"sudo apt-get update",
"sudo apt-get install -y docker.io", # Install Docker
"sudo apt-get update && apt-get install -y apt-transport-https curl",
"sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add", ## Adding the Kubernetes GPG Key on each ubuntu system
"echo \"deb https://apt.kubernetes.io/ kubernetes-xenial main\" >> ~/kubernetes.list",
"sudo mv ~/kubernetes.list /etc/apt/sources.list.d",
"sudo apt-get install ca-certificates gnupg lsb-release -y", #Set up Repository. Update the apt package index and install packages to allow apt to use a repository over HTTPS:
"sudo mkdir -m 0755 -p /etc/apt/keyrings", #1. Add Docker’s official GPG key:
"sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg", #2. Add Docker’s official GPG key:
"echo deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null",
"sudo apt remove containerd -y", # Remove the old containerd
"sudo apt update",
"sudo apt install containerd.io -y", #install new containerd
"sudo rm /etc/containerd/config.toml", #Remove the installed config file
"sudo systemctl restart containerd", #restart container
"sudo add-apt-repository -s 'deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable'",
"sudo apt-get install docker-ce docker-ce-cli containerd.io -y",
"sudo apt-get update",
"sudo apt-get install -y kubelet kubeadm kubectl",
"sudo swapoff -a", #Kubernetes does not like swap https://www.edureka.co/blog/install-kubernetes-on-ubuntu
"sudo kubeadm init --pod-network-cidr=10.244.0.0/16",
#Post Config
"mkdir -p $HOME/.kube",
"sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config",
"sudo chown $(id -u):$(id -g) $HOME/.kube/config",
"sudo hostnamectl set-hostname kubernetes-master",
#Firewall Rules
"sudo ufw allow 6443",
"sudo ufw allow 6443/tcp",
#Container Network installation
"kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml",
]
}
}
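Before applying, you can optionally sanity-check the file with Terraform's built-in commands; these touch nothing on Linode:
terraform fmt       # normalize the formatting of your .tf files
terraform validate  # catch syntax and reference errors before applying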
I will provide a breakdown of the commands below, along with their purpose. To enhance readability, I will organize the commands into separate sections.
First, we have the disk configuration. You may recall we defined a 30GB volume resource in Part 1. This section of commands is not actually used by the Kubernetes configuration; the plan was to use it for a persistent volume, which I'll cover in another post.
echo Formatting disk and mounting to /mnt/data
sudo mkfs.ext4 -F /dev/sdb
sudo mkdir /mnt/data
sudo mount /dev/sdb /mnt/data
echo /dev/sdb /mnt/data ext4 defaults 0 0 | sudo tee -a /etc/fstab
- Firstly, it formats the disk /dev/sdb as an ext4 filesystem (the -F flag forces the format), creates a new directory named /mnt/data, and mounts the disk to that directory. It also appends an entry to /etc/fstab so the disk is automatically mounted on startup.
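If you want to verify the disk setup after provisioning, SSH into the instance and run a few standard checks (assuming the /dev/sdb device used above):
lsblk -f /dev/sdb      # confirm the volume carries an ext4 filesystem
df -h /mnt/data        # confirm the volume is mounted at /mnt/data
tail -n 1 /etc/fstab   # confirm the fstab entry was appended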
This next section updates the package list with the latest version information from the repositories, installs Docker as the container runtime, and installs apt-transport-https and curl.
sudo apt-get update
sudo apt-get install -y docker.io
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
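A quick way to confirm Docker installed correctly is to check its version and service status on the instance:
docker --version                  # print the installed Docker version
sudo systemctl is-active docker   # should print "active"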
The following commands add the apt repository, which I sourced from Install Docker Engine on Ubuntu under the section "Install using the apt repository". You need to do this when you install Docker.
sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -   # Add the Kubernetes GPG key on each Ubuntu system
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list
sudo mv ~/kubernetes.list /etc/apt/sources.list.d
sudo apt-get install ca-certificates gnupg lsb-release -y   # Install packages that let apt use a repository over HTTPS
sudo mkdir -m 0755 -p /etc/apt/keyrings   # Create the keyrings directory
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg   # Add Docker's official GPG key
echo deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
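To verify the repository was added correctly, you can ask apt where the docker-ce package would come from:
sudo apt-get update
apt-cache policy docker-ce   # the candidate version should be served from download.docker.com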
The next commands are required for us to join the worker node to the master. Without them, we get the containerd error shown below.
sudo apt remove containerd -y # Remove the old containerd
sudo apt update
sudo apt install containerd.io -y #install new containerd
sudo rm /etc/containerd/config.toml #Remove the installed config file
sudo systemctl restart containerd #restart container
sudo add-apt-repository -s 'deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable'
sudo apt-get install docker-ce docker-ce-cli containerd.io -y
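Before kubeadm init runs, containerd needs to be up and serving its socket. A quick check after the reinstall, using containerd's own client:
sudo systemctl is-active containerd   # should print "active"
sudo ctr version                      # confirms the client can reach /run/containerd/containerd.sock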

This particular error is due to an issue in Ubuntu 20.04, which ships an old version of containerd that is not compatible with the Kubernetes version I'm installing. To get around this, I followed some articles that advised reinstalling containerd, which resolved the issue.
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2023-03-29T07:17:18Z" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
I've linked some references below, as this issue looks to be very common; they helped me resolve it.


Next, we have the commands that install Kubernetes and switch off swap, which is recommended because swap can cause performance and stability issues with Kubernetes.
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo swapoff -a", #Kubernetes does not like swap
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
apt-get update refreshes the package list again, and apt-get install then installs the Kubernetes components:
- kubelet is the primary node agent, which runs on every node in the cluster.
- kubeadm is the tool used to bootstrap and manage the cluster.
- kubectl is the command-line interface for interacting with Kubernetes.
Finally, sudo kubeadm init --pod-network-cidr=10.244.0.0/16 initializes a new Kubernetes cluster and specifies the IP address range to use for the pod network, which allows pods to communicate across nodes.
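One tip for Part 3: kubeadm init prints a kubeadm join command containing a bootstrap token, and that token expires after 24 hours by default. If you lose the command or the token expires before the worker node exists, you can regenerate it on the master:
sudo kubeadm token create --print-join-command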
And lastly, we run these commands:
#Post Config
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo hostnamectl set-hostname kubernetes-master
#Firewall Rules
sudo ufw allow 6443
sudo ufw allow 6443/tcp
#Container Network installation
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
You need to run these commands to configure kubectl to access the Kubernetes API server on the local machine. By default, kubectl looks for its configuration file at ~/.kube/config, which is why we copy the admin config there. We then change the file's ownership so our user has the permissions needed to access the cluster.
After that, we configure the firewall rules. Port 6443 is used to communicate with the Kubernetes API. We also install a networking solution called Flannel, which contains the configuration to set up the network overlay. A network overlay is a virtual network created on top of the existing physical network infrastructure.
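Once the Flannel manifest is applied, it's worth confirming the pods started before moving on. Depending on the manifest version, Flannel's pods run in the kube-system or kube-flannel namespace, so the simplest check spans all namespaces:
kubectl get pods --all-namespaces   # look for kube-flannel-ds pods in the Running state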
2. Apply and Verify the configuration
Now all that's left is to apply the terraform code and test the configuration.
- In your console, run terraform apply:
terraform apply -var-file="terraform.tfvars"
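If you'd like to preview the changes first, the standard terraform plan command shows what will be created without touching anything:
terraform plan -var-file="terraform.tfvars"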

- It shouldn't take too long to apply. The final output will look like the below:

- Log into the host using SSH and verify the node status; an example login follows the variable block below. The password comes from the root_password variable in the variables.tf file we created earlier, and the username is root by default.
variable "root_password" {
type = string
default = "p@assw0rd!@#$%"
}
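The login itself looks like the following; replace <instance-ip> with the instance's public IP, which is shown in the apply output and in the Linode dashboard:
ssh root@<instance-ip>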

To check the status of the nodes in your Kubernetes cluster, you can use the kubectl get nodes command. It will show a list of all the nodes in the cluster along with their status. Run this command on the Kubernetes master node.
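For reference, the output will look something like the below (the version and age will differ on your cluster, and the node may briefly show NotReady while Flannel starts up):
NAME                STATUS   ROLES           AGE   VERSION
kubernetes-master   Ready    control-plane   5m    v1.26.x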

Great, the master node is now up. All that's left is to configure the worker node, which we haven't created yet, and join it to the control plane. I will do this in the next part.
If you no longer need the Linode instance, use the below command to destroy it.
terraform destroy -var-file="terraform.tfvars"
Continue to Part 3:
Found this article useful? Why not buy Phi a coffee to show your appreciation.