Josphat Mutai
Jul 12, 2025

Quick Kubernetes Install on Ubuntu using k0s

k0s (pronounced "K-zero-ess") is a lightweight Kubernetes distribution created by the Mirantis team as a deployment option well suited to edge computing and IoT use cases. It lets you install and run a Kubernetes cluster on low-resource infrastructure such as Raspberry Pi hardware. k0s is certified, 100% upstream Kubernetes.

In this blog post we will install a k0s Kubernetes cluster on two Ubuntu Linux machines. One will act as the control plane node and the other as a worker node, but because the controller is installed with worker features enabled, both nodes can run containerized workloads.

Install Kubernetes on Ubuntu using k0s

Ensure your Ubuntu Linux machines are updated:

sudo apt update
sudo apt upgrade -y
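
The k0s install script used below is fetched with curl; it is usually preinstalled on Ubuntu, but if it is missing you can add it first:

sudo apt install -y curl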

Our installations are based on the following host definitions:

Hostname                   IP Address
k0snode01.cloudspinx.com   192.168.20.38
k0snode02.cloudspinx.com   192.168.20.39
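
If these hostnames are not resolvable through your DNS, one option (an assumption about your environment, not a requirement) is to map them in /etc/hosts on both machines:

# /etc/hosts entries matching the table above; adjust to your environment
192.168.20.38 k0snode01.cloudspinx.com k0snode01
192.168.20.39 k0snode02.cloudspinx.com k0snode02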

Create Control Plane node

After the update, log in to the first Ubuntu machine and install k0s by running the following command:

curl --proto '=https' --tlsv1.2 -sSf https://get.k0s.sh | sudo sh

The script downloads the latest stable version of k0s and installs it as an executable at /usr/local/bin/k0s.
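
You can confirm the binary is in place and check which version was installed:

k0s version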

To be able to extend the cluster into a multi-node setup later, omit the --single flag, which disables the features needed for multi-node clusters:

sudo k0s install controller --enable-worker --no-taints

If you don't want this node to run application containers, remove the --enable-worker --no-taints flags.
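
Conversely, for a single-node lab that will never be extended, the --single flag can be used on its own; a minimal sketch of that variant (not used in the rest of this guide):

sudo k0s install controller --single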

After the installation, start the k0s service; it will also be enabled to start automatically on system reboot:

sudo k0s start
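
Under the hood, the install step registers a systemd unit for the controller role (typically named k0scontroller), so you can also inspect it with systemctl if you prefer:

sudo systemctl status k0scontroller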

Check k0s status by running:

$ sudo k0s status
Version: v1.32.2+k0s.0
Process ID: 2489
Role: controller
Workloads: true
SingleNode: false
Kube-api probing successful: true
Kube-api probing last error:  

Now that the cluster looks healthy, you can manage it using the built-in kubectl command:

$ sudo k0s kubectl get nodes
NAME                       STATUS   ROLES           AGE     VERSION
k0snode01.cloudspinx.com   Ready    control-plane   3m20s   v1.32.2+k0s
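
Typing sudo k0s kubectl for every command gets tedious; one convenience (assuming a bash shell) is a short alias such as k:

echo "alias k='sudo k0s kubectl'" >> ~/.bashrc
source ~/.bashrc
k get nodes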

Adding Worker nodes

After the control plane node is up, additional worker nodes can be added. For this, a join token is required; it embeds the information that establishes mutual trust between the worker and the controller(s) and allows the node to join the cluster as a worker.

The token can be obtained by running the following command on the controller node:

sudo k0s token create --role=worker

You will get output similar to:

H4sIAAAAAAAC/2xV3Y7iPBa876fgBWY+J5DWB9JeTMAGAjFjx8cG34U4Q2g7iTsJv6t991Uzs9KutHcnp46qIqtUlfuzLLv+3Daz0TV4K9ylH8qun719G/2ZZ2+j0WhUlN1w/nUu8qH8ll+Gqu3Ow+ObyYd8NtpmaNhmwZyDScQ5XnCZQAY6YYgAf2FomNsgyYDHHNMFU9JrRMYZJLFGTh5qutc2adRyaFIVfOaB3+oa3eUi0WrvH2ALZBDnCvz2GPgkX+kPYQnncNswR2KDDObSUObIigkiOZCJgKhKA+2VqzZG+gPU949SkE8leaxBj9OV20lM3oXkpMRTYIJQjpOKN8WtPPnfGPDuvzEBSVNi/cmknGsUKWYJlZjvzYLoo9TRoQ5USvxB7LlOrRzr0JAcgjjFZgFjspo7mhw+KAaQcyaJZIBRBgk2iLxuM0SuX28CwCu+StwxNDkAfR5RBAxVeyOIZfukE0A/eU3UMfBNropNaaMdtVHFnmQjPuiFjddhYe/vLPR9vud5AfdtoeQaBJsUKAoVcTZfkfaApu8FSs4lqSJl1+EWDeccD6ui0S4/+TB9apfiu0sXlTjCPQKUrDPJtwyGWyZOoV7egpRUvZJ0YZwfNJJXtV/fhHTb44J4gSyiC3IXy2EhHcfmPNznH0l1HPNnutI0HfOucD86HZJAOfkJyHuh3MR88FsBgRKWtmmQJPrcj/VjWB5tey0x+VSrosuwg+JJRRHS0Cy/fOAuecOe/ClbABdRMJGpq9QAr9NA8kxwsglpJB/T2w5TkjYcw3K63+1dzpdkK2qeMJmsC1RFJnD1Ed3G5ocfq5rjwsn9jvhar9xC1uTdAEwKXKUijKqjqHq993PVJM+diBfiMb2mIbqlUndyIfODJEumDGWWxDz0PLdJPLdBTLGJUyktXxnCztMxgIwNcnNW+xhw0gin12CDOVOHCYcEGPAY4HQVsL5xhO9ScawDsmc2gSzgXtfrDZf2TqHiQvHtAWicPeXZOHkxy0MkMdmx2nSFqrocisg84y8PPrklsUA0/tLXq1hk8vQU5ykpkNvmSz/JTj6ggQsOim6ZS36mMFiK6SAkvacr2vAw4FzG4U4ZTsFv0tXhke0Tn4eEQG1WO2GeouafBpFVsVxPeDbtc5nc5i7Z6do05Z6hozVa7hkCVPEdRoEC09IGHtSS2mD+hBW55eTHo2ioZs6PNbI3vfc7WPYoG/MzAHlSJydFYze6vl+O4B0ovS6WemeAtAqmP8XTtAfUPoyd3o0120PdolL5OUenscEDz5+nsURGlJJeYOXDHOQScNTJBWWHk99l4b2HveQGU2uaNijxMDniKuLNeiIhmh+ErA3hFnCSs3EaMeRvhlBtrAlBkHmpqN4RsubWXk0YnFVj2Nzia7Z0bV4bmy6SQahTwM/9lT31BMKgLpriWjR0DuLL41pxJWmGoONLXvMP0vFs8MfAbeVj2IrQOe2qTkvYmIXuaX2Pfwq0+ZPNRFh2YkhmErtlBjSWGF65vGXtP14B35fdtexmo2oYfD/7669gGn4P3v/+HqLv479n75PJ+G00avK6nI0s6t+KthnK+/C7KH7Pf4riT2u8rr4Wl/71dTmWrhy+Hdt26Icu9//Ldum6shm+/YfptbTnxsxG87b5dT69+a78VXZlU5T9bPTPf719sb7E/5D8H/qX8OsXhtaWzWwURH37GXy39aX6eBSTSXcZwtL6t38HAAD//97pMR0EBwAA
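
Rather than copying and pasting the long token, one option is to write it straight to a file on the controller and copy that file over; the username and destination below are only examples:

# On the controller: write the join token to a file
sudo k0s token create --role=worker > worker-join-token.txt

# Copy it to the worker node (adjust the user and host to your environment)
scp worker-join-token.txt ubuntu@192.168.20.39:~/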

Log in to the worker node, then install k0s:

curl --proto '=https' --tlsv1.2 -sSf https://get.k0s.sh | sudo sh

Save the token to a file:

nano worker-join-token.txt

Paste in the token generated on the controller, then use the token file to join the node to the cluster:

sudo k0s install worker --token-file ./worker-join-token.txt

Start k0s on the worker node(s):

sudo k0s start

Check status:

$ sudo k0s status
Version: v1.32.2+k0s.0
Process ID: 2522
Role: worker
Workloads: true
SingleNode: false
Kube-api probing successful: true
Kube-api probing last error:  

On the controller node, list nodes in the cluster:

$ sudo k0s kubectl get nodes
NAME                       STATUS   ROLES           AGE    VERSION
k0snode01.cloudspinx.com   Ready    control-plane   20m    v1.32.2+k0s
k0snode02.cloudspinx.com   Ready    <none>          4m4s   v1.32.2+k0s

From the output, we can confirm the cluster has the two nodes as expected.
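
The ROLES column shows <none> for the worker because the role label is only cosmetic; if you would like it displayed, you can add the label yourself:

sudo k0s kubectl label node k0snode02.cloudspinx.com node-role.kubernetes.io/worker=worker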

Display / write kubeconfig

If you want to display the cluster admin's kubeconfig file, run:

sudo k0s kubeconfig admin

To make this the default kubeconfig, write its contents to the ~/.kube/config file:

mkdir -p ~/.kube
sudo k0s kubeconfig admin > ~/.kube/config

If you are doing this on a workstation rather than on the controller node, copy the output of k0s kubeconfig admin and save it to ~/.kube/config.
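
For example, from a workstation with SSH access to the controller, you could pull the kubeconfig in one step; the username and the assumption of passwordless sudo on the controller are hypothetical, so adjust as needed:

mkdir -p ~/.kube
ssh ubuntu@k0snode01.cloudspinx.com 'sudo k0s kubeconfig admin' > ~/.kube/config
chmod 600 ~/.kube/config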

Install kubectl and run a test deployment

Next, install kubectl for your platform:

  • Linux

    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
    
  • macOS (Intel)

    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
    chmod +x ./kubectl
    sudo mv ./kubectl /usr/local/bin/kubectl
    sudo chown root: /usr/local/bin/kubectl
    
  • macOS (Apple Silicon)

    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/arm64/kubectl"
    chmod +x ./kubectl
    sudo mv ./kubectl /usr/local/bin/kubectl
    sudo chown root: /usr/local/bin/kubectl
    

Test to ensure the version you installed is up-to-date:

kubectl version --client

Sample output:

Client Version: v1.32.2
Kustomize Version: v5.5.0

We can use kubectl to list all pods in the cluster:

$ kubectl get pods -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-7d4f7fbd5c-7fm2z          1/1     Running   0          21m
kube-system   coredns-7d4f7fbd5c-f68qs          1/1     Running   0          21m
kube-system   konnectivity-agent-5x2b6          1/1     Running   0          38m
kube-system   konnectivity-agent-mj4xn          1/1     Running   0          21m
kube-system   kube-proxy-h8npl                  1/1     Running   0          21m
kube-system   kube-proxy-z9km9                  1/1     Running   0          38m
kube-system   kube-router-5p2td                 1/1     Running   0          21m
kube-system   kube-router-gmxrd                 1/1     Running   0          38m
kube-system   metrics-server-7778865875-srpxv   1/1     Running   0          38m

Create a hello-world-deploy.yaml file to confirm you can run new applications in the cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: nginxdemos/hello
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
spec:
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort

Create resources:

kubectl apply -f hello-world-deploy.yaml

Expected output:

deployment.apps/hello-world created
service/hello-world-service created
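
You can also wait for the deployment to become fully available before listing the objects:

kubectl rollout status deployment/hello-world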

List all objects created in the current namespace:

kubectl get all

Sample output:

NAME                               READY   STATUS    RESTARTS   AGE
pod/hello-world-7b5dbb9789-b55q6   1/1     Running   0          71s
pod/hello-world-7b5dbb9789-n62q9   1/1     Running   0          71s

NAME                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/hello-world-service   NodePort    10.104.156.177   <none>        80:30944/TCP   71s
service/kubernetes            ClusterIP   10.96.0.1        <none>        443/TCP        44m

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello-world   2/2     2            2           71s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/hello-world-7b5dbb9789   2         2         2       71s

Open your browser and access http://<node-ip>:30944, replacing <node-ip> with the IP address of any cluster node.
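
You can also test from the command line with curl, substituting the IP of any node; note that the NodePort (30944 here) is assigned by Kubernetes and will likely differ in your cluster:

curl -s http://192.168.20.38:30944 | head -n 20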

Uninstall k0s

Follow these steps to remove k0s.

Stop the k0s service:

sudo k0s stop

Once the service is stopped, run the reset command:

sudo k0s reset

The k0s reset command cleans up containers, data directories, mounts, and network namespaces.

Reboot the system:

sudo reboot

After a reboot, k0s can be reinstalled if desired.
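
Since the install script placed the binary at /usr/local/bin/k0s, you can optionally delete it as well for a completely clean machine:

sudo rm -f /usr/local/bin/k0s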