Yes, deploying a single-node Kubernetes cluster with K3s is easy, fast, and hassle-free. On top of that, K3s is built for lightweight K8s workloads, so it doesn't need much in the way of system resources. Anyone who wants to learn K8s and practice with kubectl can deploy K3s on a VM with 4 GB of RAM and get going. The deployment process is so simple that it boils down to three or four commands, and the single-node Kubernetes cluster is ready to practice on in about two minutes.
K3s is a lightweight, certified Kubernetes distribution from SUSE, and it is open source. Similarly, there are RKE (Rancher Kubernetes Engine) and RKE2, two other flavors from SUSE under the same open-source umbrella. SUSE Rancher, by contrast, is SUSE's flagship commercial product: it supports managing multiple K8s clusters, whether onboarded from the cloud or on-premises, or deployed fresh on bare metal, cloud, or on-premises environments. We can truly call it K8s orchestration.
K3s is not an abbreviation of anything; the name was simply chosen to reflect its simplicity and smaller footprint. This is how the Rancher.com site describes K3s:
Host VM Details
Here, I will be using a VM running RHEL 8.6 with 4 GB of RAM and a single vCPU, as shown below:
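If you want to confirm that your own VM matches these specs before starting, a quick check with standard Linux commands might look like this (the output will of course differ on your system):

```shell
# Show distribution, total memory, and vCPU count
head -n 2 /etc/os-release   # distribution name and version
free -h | grep -i mem       # total and available RAM
nproc                       # number of vCPUs
```

Any RHEL-family or similar Linux distribution with at least 4 GB of RAM should be fine for this exercise.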
Let's Deploy a Single-Node K3s Now
Installing K3s is literally a matter of running the single command shown below and nothing more (it goes without saying that the host needs internet connectivity):
# curl -sfL https://get.k3s.io | sh -
It installed in no time. The service was started and enabled automatically; there was nothing more to do on our end. All set!
Let's verify that the K3s service is up and running
# systemctl status k3s.service
Let's verify that the Kubernetes node is ready (for example, with `kubectl get nodes`)
Installing K3s also installs the kubectl command-line utility, along with the k3s, k3s-killall.sh, and k3s-uninstall.sh scripts/utilities, which are essential for managing K3s Kubernetes clusters.
Let's verify that the single-node Kubernetes cluster is up with all the required kube-system pods (`kubectl get pods -n kube-system`):
Let's create a simple Nginx pod and check that it works
Let's deploy a simple Nginx pod now and test that it works. I've used a deployment.yaml file here, which contains the Nginx details such as the image name, the port to be opened, and so on. The deployment.yaml file I used to create this pod is shown below:
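The original file isn't reproduced here, so the manifest below is only a minimal sketch of what such a deployment.yaml could look like: the deployment name, label, and replica count are my assumptions, while the nginx image and port 80 follow what the text describes. The heredoc writes it out to deployment.yaml:

```shell
# Write a minimal Nginx Deployment manifest (illustrative sketch, not the author's original file)
cat <<'EOF' > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # hypothetical name
  labels:
    app: nginx
spec:
  replicas: 1                   # a single pod is enough for this demo
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest     # image name, per the text
        ports:
        - containerPort: 80     # port to be opened, per the text
EOF
```

With the file in place, the pod is created by applying it with `kubectl apply -f deployment.yaml`.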
Creating the simple Nginx pod now:
Yes, it works!
I'm not going to go any deeper into K3s or pod management here. This is just the basics, showing how easily we can deploy a simple K3s Kubernetes cluster and start working with it.
Uninstalling the K3s Cluster
Uninstallation is made easy with the readily available scripts: first run k3s-killall.sh, then run k3s-uninstall.sh. That's it!
All the best! Happy Learning!