Get started with k3s
k3s configuration and installation using the Always Free Resources from Oracle Cloud Infrastructure and Ansible, to deploy a full (and free) Kubernetes cluster.
Using the Always Free Resources from Oracle Cloud Infrastructure requires setting up a NAT Gateway, which gives cloud resources without public IP addresses access to the internet without exposing them to incoming internet connections.
However, the NAT Gateway is no longer available in the Always Free Resources, so the other nodes cannot reach the internet with only an ephemeral private address (to update Linux packages or download k3s, for instance).
The primary solution is simply to set up a single-node cluster while looking for a workaround.
Minimum requirements
The following are the minimum CPU and memory requirements for nodes in a high-availability k3s server:
| Deployment size | Nodes | vCPUs | RAM |
|---|---|---|---|
| Small | Up to 10 | 2 | 4 GB |
| Medium | Up to 100 | 4 | 8 GB |
| Large | Up to 250 | 8 | 16 GB |
| X-Large | Up to 500 | 16 | 32 GB |
| XX-Large | 500+ | 32 | 64 GB |
k3s is a lightweight Kubernetes distribution. It is easy to use and easy to deploy.
See the documentation about k3s.
Oracle's Always Free Resources can be used to create a k3s cluster on the Oracle Cloud platform.
See the documentation about Oracle - Always Free Resources.
Preparation
Preparation of the control plane node.
Copy the SSH key to the server
From your local machine, copy the SSH key to the server. It will be used by Ansible to connect to the other nodes.
Export the $PRIVATE_KEY and $OCI_SERVER_URL environment variables on your local machine.
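For example (a minimal sketch: the key path matches the one referenced later in the Ansible inventory, and the server address is a placeholder to replace with your own):
# On your local machine: point to the private key and the public address of the control plane node
export PRIVATE_KEY=~/.ssh/private.key
export OCI_SERVER_URL=<CONTROL_PLANE_PUBLIC_IP_OR_HOSTNAME>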
scp -i $PRIVATE_KEY $PRIVATE_KEY ubuntu@$OCI_SERVER_URL:~/.ssh/
Update and upgrade Ubuntu
sudo apt update && \
sudo apt upgrade -y
sudo reboot
The reboot may be needed to apply a new kernel.
Install Ansible and prepare other nodes for Kubernetes installation
sudo apt install software-properties-common -y && \
sudo add-apt-repository --yes --update ppa:ansible/ansible && \
sudo apt install ansible -y
Create a ~/.ansible.cfg file in the user's home directory:
[defaults]
host_key_checking = False
Modify Ansible's hosts file, located at /etc/ansible/hosts:
sudo vim /etc/ansible/hosts
For instance, add the following lines to the Ansible hosts file to create the workers group:
# My inventory file is located in /etc/ansible/hosts on the cluster.
[workers]
ampere-1 ansible_host=10.0.X.X ansible_ssh_private_key_file=~/.ssh/private.key
amd-0 ansible_host=10.0.X.X ansible_ssh_private_key_file=~/.ssh/private.key
amd-1 ansible_host=10.0.X.X ansible_ssh_private_key_file=~/.ssh/private.key
Test connection to the nodes:
ansible workers -m ping
Run Ansible playbooks
Update and upgrade all nodes (direct link to the YAML file):
ansible-playbook ./ansible/upgrade.playbook.yaml
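If you are not using the playbook, a roughly equivalent ad-hoc command (an assumption about what the playbook does, based on its name) is to run Ansible's apt module against the workers group:
# Dist-upgrade all workers after refreshing the package cache (requires privilege escalation)
ansible workers -b -m apt -a "upgrade=dist update_cache=yes"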
Configure firewall rules (direct link to the YAML file):
ansible-playbook ./ansible/firewall.playbook.yaml
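The exact rules live in the linked playbook; as a rough illustration of the kind of rule k3s needs (agents must reach the server on TCP port 6443, and flannel VXLAN uses UDP port 8472 between all nodes), an ad-hoc version could look like this:
# Open the flannel VXLAN port on every worker (illustrative only, not the full rule set)
ansible workers -b -m shell -a "iptables -I INPUT -p udp --dport 8472 -j ACCEPT"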
According to the official Kubernetes documentation, CronJob schedules are interpreted in the time zone of the control plane node (where the kube-controller-manager runs). So if the control plane node uses a different time zone than the one you expect, CronJobs will not run at the expected time.
To avoid this, you can set the time zone of the control plane node to whatever time zone you want to use. For instance:
sudo timedatectl set-timezone Europe/Paris
Miscellaneous configuration, like timezone setup (direct link to the YAML file):
ansible-playbook ./ansible/config.playbook.yaml
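If you only care about the timezone part and prefer an ad-hoc command over the playbook (assuming systemd's timedatectl is available on the workers, as it is on Ubuntu):
# Align the workers' timezone with the control plane (adjust the timezone to your own)
ansible workers -b -m shell -a "timedatectl set-timezone Europe/Paris"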
Installation
Common way to install k3s
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644
--write-kubeconfig-mode 644 is used to set the permissions of the kubeconfig file (/etc/rancher/k3s/k3s.yaml) at install time, so it can be read without sudo.
You can follow the k3s installation by running: sudo tail -f /var/log/syslog.
If everything went well, you should see the following message:
Running load balancer 127.0.0.1:6444 -> [<CONTROL_PLANE_IP>:6443]
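You can also confirm that the node has registered and is Ready (kubectl works without sudo here because the kubeconfig was made world-readable at install time):
kubectl get nodes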
Installation options
For instance, to install k3s using Docker as the container runtime, skip deploying the Traefik Ingress controller, and make the /etc/rancher/k3s/k3s.yaml file world-readable (note that recent k3s releases replace --no-deploy traefik with --disable traefik):
curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644 --no-deploy traefik --docker
Optional - Add the following line to your ~/.bashrc or ~/.zshrc file (used by Helm):
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
- Activate bash auto-completion for kubectl.
source /usr/share/bash-completion/bash_completion && \
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null && \
echo 'alias k=kubectl' >>~/.bashrc && \
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
Then, exit and reconnect to the control plane node to activate bash completion.
Install k3s as an agent on the worker nodes
Retrieve the node_token from the control plane k3s server. The node_token is used by the agents to join the cluster.
sudo cat /var/lib/rancher/k3s/server/node-token
Before the next step, you can watch live as the worker nodes join the control plane node by running:
watch --interval 1 kubectl get nodes -o=wide
And then, open a new terminal to continue.
Use Ansible to connect the other nodes to the control plane node
ansible workers -v -m shell -a "curl -sfL https://get.k3s.io | K3S_URL=https://<CONTROL_PLANE_NODE_IP>:6443 K3S_TOKEN=<TOKEN> sh -"
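Equivalently, if you run the command from the control plane node, you can capture the token and the node's private IP in shell variables first (a sketch; hostname -I picks the first address, adjust if your node has several interfaces):
# Run these on the control plane node
TOKEN=$(sudo cat /var/lib/rancher/k3s/server/node-token)
CONTROL_PLANE_IP=$(hostname -I | awk '{print $1}')
ansible workers -v -m shell -a "curl -sfL https://get.k3s.io | K3S_URL=https://$CONTROL_PLANE_IP:6443 K3S_TOKEN=$TOKEN sh -"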
(Optional) Taint the control plane node (don't deploy pods on it)
In a high-availability cluster, the control plane node is responsible for managing the cluster. It is good practice to taint it so that pods are not scheduled on it and the workers take over. This is done by adding a NoSchedule taint to the control plane node with the following command:
kubectl taint node <control-plane-node> node-role.kubernetes.io/control-plane:NoSchedule
To remove the taint, run the same command with a - appended at the end:
kubectl taint node <control-plane-node> node-role.kubernetes.io/control-plane:NoSchedule-
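To check whether the taint is currently applied, you can inspect the node:
kubectl describe node <control-plane-node> | grep Taints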
Starting in v1.20, the node-role.kubernetes.io/master:NoSchedule taint is deprecated in favor of node-role.kubernetes.io/control-plane and will be removed in v1.25.
Testing
Test the full deployment of the cluster by deploying a simple whoami application:
kubectl create deployment whoami --image=containous/whoami
kubectl expose deployment whoami --port=80 --type=NodePort
kubectl get svc
Then, copy the NodePort of the exposed service and access it: http://<EXTERNAL_IP>:<NODE_PORT>
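If you prefer to fetch the NodePort from the command line instead of copying it manually, a small sketch:
# Extract the allocated NodePort of the whoami service and query it
NODE_PORT=$(kubectl get svc whoami -o jsonpath='{.spec.ports[0].nodePort}')
curl http://<EXTERNAL_IP>:$NODE_PORT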
Well done!
Your Kubernetes cluster with ...
- k3s,
- Ansible,
- on Oracle Cloud Infrastructure with the Always Free Resources
... is working!
More information
Install Rancher on a single-node k3s cluster
This installation method will not work properly if your k3s cluster has more than one node: Rancher will only show a single-node cluster, even if there are more nodes. If you want to install Rancher on a multi-node cluster, you will need to install it using Helm.
docker run --name rancher -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.5.9-head
See the Installing Rancher on a Single Node Using Docker documentation for more information.
Troubleshooting
Unreachable ports 80 and 443
After a reboot or an upgrade of the control plane node, ports 80 and 443 may become unreachable. This is usually caused by the Traefik load balancer and the external IP of the control plane node.
The solution is to patch the Traefik service with a new external IP matching the internal IP of your control plane node.
First, find the internal IP of the control plane node:
kubectl get nodes -o wide
Then, patch the Traefik service with the new external IP:
kubectl patch svc traefik -n kube-system -p '{"spec":{"externalIPs":["<INTERNAL-OR-PRIVATE-NODE-IP>"]}}'
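You can then verify that the service exposes the expected external IP:
kubectl get svc traefik -n kube-system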
Uninstall
k3s is installed with built-in scripts to uninstall and remove all contents.
To kill all resources from a server node
The killall script cleans up containers, k3s directories, and networking components, while also removing the iptables chain with all the associated rules.
The cluster data will not be deleted.
/usr/local/bin/k3s-killall.sh
To uninstall k3s from a server node
/usr/local/bin/k3s-uninstall.sh
To uninstall k3s from an agent node
/usr/local/bin/k3s-agent-uninstall.sh