Basics
Kubernetes/k3s uses the Container Storage Interface (CSI) to provide persistent storage. More details about that can be found here.
Options
By default k3s/Rancher ships with a storage class called “local-path”.
According to the k3s documentation an alternative is “longhorn” (see here).
But there are plenty of other options based on NFS, Ceph (both CephFS and RBD), Gluster (quite dated by now), and many more.
For the very first steps I just used local-path, but for a clustered setup you’ll most likely want a solution that can be accessed by multiple nodes, to allow failover in case of node errors.
For now I went with the NFS CSI driver: setup is relatively easy (and I already have an NFS server at hand).
Maybe I’ll also give the Proxmox CSI a try, as my current test environment runs on Proxmox PVE anyway, so this could be a nice match.
Preparations
First let’s check our k3s version, as some storage drivers require specific server versions:
linux # kubectl version
Client Version: v1.35.3
Kustomize Version: v5.7.1
Server Version: v1.34.5+k3s1
Turns out that’s not necessary for the following examples (but may matter for other CSIs, so I kept it anyway).
Option 1: local-path
As mentioned above, k3s comes with a pre-defined local-path storage CSI. A simple example for creating a persistent volume claim (PVC) is described in the docs:
linux # cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
linux # kubectl apply -f pvc.yaml
This will create a 2Gi storage claim called “local-path-pvc”.
Side note: I was able to store much larger files there, so it looks like the requested size is (currently?) not monitored/enforced by the local-path CSI. So better not rely on this setting.
Attaching that PVC to a pod/deployment looks like this:
linux # cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storage-test
spec:
  selector:
    matchLabels:
      app: storage-test
  replicas: 1
  template:
    metadata:
      labels:
        app: storage-test
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: volv
          mountPath: /data
        ports:
        - containerPort: 80
      volumes:
      - name: volv
        persistentVolumeClaim:
          claimName: local-path-pvc
linux # kubectl apply -f deployment.yaml
This will create a volume named “volv” from the PVC called “local-path-pvc” and mount it to “/data” within the pod.
On k3s/rancher the default storage location is /var/lib/rancher/k3s/storage/pvc-<uuid>_default_local-path-pvc on the node the pod gets scheduled on.
Always keep in mind that this storage is node-local: pods/deployments using persistent storage on local-path cannot be auto-migrated to other nodes in case of failure.
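The storage location itself is configurable via the provisioner’s ConfigMap; in k3s this is (as far as I can tell) called “local-path-config” in the kube-system namespace. A sketch of the relevant part, assuming you want to move storage to a hypothetical /data/local-path on all nodes (note that k3s may re-apply its bundled manifest, so a change here might need to go through a custom deployment of the provisioner):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: kube-system
data:
  # nodePathMap tells the provisioner where to place volumes per node;
  # DEFAULT_PATH_FOR_NON_LISTED_NODES applies to all nodes not listed explicitly
  config.json: |
    {
      "nodePathMap": [
        {
          "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": ["/data/local-path"]
        }
      ]
    }
```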
Option 2: NFS
Installation is done using kubectl or helm, as described here:
linux # curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/v4.13.1/deploy/install-driver.sh | bash -s v4.13.1 --
Installing NFS CSI driver, version: v4.13.1 ...
serviceaccount/csi-nfs-controller-sa created
serviceaccount/csi-nfs-node-sa created
clusterrole.rbac.authorization.k8s.io/nfs-external-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/nfs-csi-provisioner-binding created
clusterrole.rbac.authorization.k8s.io/nfs-external-resizer-role created
clusterrolebinding.rbac.authorization.k8s.io/nfs-csi-resizer-role created
csidriver.storage.k8s.io/nfs.csi.k8s.io created
deployment.apps/csi-nfs-controller created
daemonset.apps/csi-nfs-node created
NFS CSI driver installed successfully.
Check installation success:
linux # kubectl -n kube-system get pod -o wide -l app=csi-nfs-controller
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-nfs-controller-645f5d9c49-7bmh2 5/5 Running 0 39s 192.168.1.101 k3s-2 <none> <none>
linux # kubectl -n kube-system get pod -o wide -l app=csi-nfs-node
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-nfs-node-dt5pg 3/3 Running 0 65s 192.168.1.101 k3s-2 <none> <none>
csi-nfs-node-g9jc6 3/3 Running 0 65s 192.168.1.100 k3s-1 <none> <none>
linux # kubectl get csidrivers
NAME ATTACHREQUIRED PODINFOONMOUNT STORAGECAPACITY TOKENREQUESTS REQUIRESREPUBLISH MODES AGE
nfs.csi.k8s.io false false false <unset> false Persistent 3m52s
Ok, this looks good. However there’s no storage class yet:
linux # kubectl get storageclasses
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 19d
So we need to create such a class first (see here for details):
linux # kubectl create secret generic nfs-csi-mount-options --from-literal mountOptions="nfsvers=4.1"
secret/nfs-csi-mount-options created
linux # cat nfs-csi.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs.mydomain.de
  share: /kubernetes
  # csi.storage.k8s.io/provisioner-secret is only needed for providing mountOptions in DeleteVolume
  csi.storage.k8s.io/provisioner-secret-name: "nfs-csi-mount-options"
  csi.storage.k8s.io/provisioner-secret-namespace: "default"
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
mountOptions:
  - nfsvers=4.1
linux # kubectl apply -f nfs-csi.yaml
storageclass.storage.k8s.io/nfs-csi created
Now let’s check again:
linux # kubectl get storageclasses
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 19d
nfs-csi nfs.csi.k8s.io Delete Immediate true 2m45s
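To use the new class, a PVC simply references it by name. A minimal sketch (the PVC name “nfs-csi-pvc” is my choice); since NFS can be mounted from several nodes at once, ReadWriteMany is possible here, unlike with local-path:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-csi-pvc
spec:
  accessModes:
    - ReadWriteMany        # NFS allows access from multiple nodes
  storageClassName: nfs-csi
  resources:
    requests:
      storage: 1Gi
```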
The install instructions also cover snapshot support; however, when I tried to install it, I encountered an error:
linux # cat nfs-snapshots-csi.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-nfs-snapclass
driver: nfs.csi.k8s.io
deletionPolicy: Delete
parameters:
  # csi.storage.k8s.io/snapshotter-secret is only needed for providing mountOptions in DeleteSnapshot
  csi.storage.k8s.io/snapshotter-secret-name: "nfs-csi-mount-options"
  csi.storage.k8s.io/snapshotter-secret-namespace: "default"
linux # kubectl apply -n kube-system -f nfs-snapshots-csi.yaml
error: resource mapping not found for name: "csi-nfs-snapclass" namespace: "" from "nfs-snapshots-csi.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
Ok, so first let’s check what CRDs are: a CRD is a CustomResourceDefinition (as explained here).
Volume snapshots, however, are not part of the Kubernetes core API (details see here) and therefore need to be installed explicitly (I found details about that here, though they were a little outdated):
linux # kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/refs/heads/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
linux # kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/refs/heads/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
linux # kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/refs/heads/master/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
Now let’s try again:
linux # kubectl apply -f nfs-snapshots-csi.yaml
volumesnapshotclass.snapshot.storage.k8s.io/csi-nfs-snapclass created
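With the class in place, a snapshot of an existing PVC can be requested via a VolumeSnapshot object. A sketch, assuming a hypothetical PVC named “my-data” (note that besides the CRDs, the external snapshot-controller also needs to be running in the cluster for snapshots to actually be taken):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-data-snap
spec:
  volumeSnapshotClassName: csi-nfs-snapclass
  source:
    # name of the (hypothetical) PVC to snapshot
    persistentVolumeClaimName: my-data
```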
Time to see this storage class in action:
linux # kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/deployment.yaml
persistentvolumeclaim/pvc-deployment-nfs created
deployment.apps/deployment-nfs created
This will create a “pvc-<uuid>” directory in the root of your NFS share and mount it to /mnt/nfs within the container.
NFS server
As I mentioned above, I already had an NFS server in place. That may not be the case for you, so here’s a very basic config:
linux # apt install libnfs-utils nfs-kernel-server
linux # cat /etc/exports
/export 192.168.1.0/24(fsid=root,rw,wdelay,root_squash,no_subtree_check,sec=sys,secure)
/export/kubernetes 192.168.1.0/24(nohide,rw,wdelay,no_root_squash,no_subtree_check,sec=sys,secure)
Keep in mind that the above config uses NFS v4.x: there is one root export directory (in my case “/export”), and all other exported directories need to be located within it (this can also be achieved using bind mounts). My Kubernetes storage will be located within “/export/kubernetes”. Note that the source path of an NFS v4.x mount does not contain the root export directory itself!
linux # cat nfs-csi.yaml
<...>
parameters:
  server: nfs.mydomain.de
  share: /kubernetes
<...>
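If the data you want to export already lives elsewhere on the NFS server, a bind mount can place it below the export root. A hypothetical /etc/fstab entry (the paths are just examples):

```
# bind-mount an existing directory below the NFSv4 export root
/srv/kubernetes  /export/kubernetes  none  bind  0  0
```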
Option 3: Proxmox
Will be added if time permits 😉
