If you have an external NFS export and want to share that with a pod/deployment, you can leverage the nfs-subdir-external-provisioner to create a storageclass that can dynamically create the persistent volume.
In contrast to manually creating the persistent volume and persistent volume claim, this dynamic method cedes lifecycle management of the persistent volume to the storage class.
So you only need to define the persistent volume claim and reference the storage class by name, leaving the storage class to handle the creation, deletion, and archival of the backing volume and its data.
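For comparison, going the static route would mean hand-writing a PersistentVolume object like the sketch below (the object name is made up, but the server and path match the export used in this article); with the dynamic provisioner, this object is generated for you.

# hypothetical example of a manually defined NFS persistent volume,
# which the dynamic provisioner makes unnecessary
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-nfs-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.2.239
    path: /data/nfs1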
Prerequisite, NFS export
You need to have an external NFS export available. This could be on a dedicated storage appliance, a clustered software solution, or even the local Host system (which is how we will demonstrate here).
To create a simple NFS export on your Ubuntu host for testing, I’ll pull instructions from the full article I wrote here.
Install OS packages
sudo apt-get update
sudo apt-get install nfs-common nfs-kernel-server -y
Create directory to export
sudo mkdir -p /data/nfs1
sudo chown nobody:nogroup /data/nfs1
sudo chmod g+rwxs /data/nfs1
Export Directory
# limit access to clients in 192.168/16 network
$ echo -e "/data/nfs1\t192.168.0.0/16(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports

$ sudo exportfs -av
/data/nfs1 192.168.0.0/16
Restart NFS service
# restart and show logs
sudo systemctl restart nfs-kernel-server
sudo systemctl status nfs-kernel-server
Show export details
# show for localhost
$ /sbin/showmount -e localhost
Export list for 127.0.0.1:
/data/nfs1 192.168.0.0/16

# show for default public IP of host
$ /sbin/showmount -e 192.168.2.239
Export list for 192.168.2.239:
/data/nfs1 192.168.0.0/16
Prerequisite, NFS client package on K8s nodes
The other requirement is that all Kubernetes nodes have the NFS client packages available. If your K8s worker nodes are based on Ubuntu, this means having the nfs-common package installed on each worker node.
# on Debian/Ubuntu based nodes
sudo apt update
sudo apt install nfs-common -y

# on RHEL based nodes
# sudo yum install nfs-utils -y
As a quick check, ssh into each K8s worker node and verify access to the host NFS export created in the last section.
# use default public IP of host
$ /sbin/showmount -e 192.168.2.239
Export list for 192.168.2.239:
/data/nfs1 192.168.0.0/16
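If you want a deeper check than showmount, you can also do a temporary test mount from a worker node (this assumes /mnt is free to use on that node).

# temporarily mount the export, list it, then unmount
sudo mount -t nfs 192.168.2.239:/data/nfs1 /mnt
ls -l /mnt
sudo umount /mnt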
Prerequisite, install helm3
Coming straight from the official helm3 documentation, you can install helm3 on Debian/Ubuntu using the following commands.
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
Install helm repo
Assuming you have the KUBECONFIG environment variable pointing at your Kubernetes context, go ahead and add the helm repository for nfs-subdir-external-provisioner.
$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
"nfs-subdir-external-provisioner" has been added to your repositories

$ helm repo list
NAME                              URL
nfs-subdir-external-provisioner   https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
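Optionally, refresh the repository metadata and confirm the chart can be found before installing.

# refresh repo metadata and confirm the chart is available
helm repo update
helm search repo nfs-subdir-external-provisioner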
Install helm chart for NFS
Now install the helm chart with the proper variables. We set 'nfs.server' and 'nfs.path' to the NFS server and export path created in the earlier sections, and 'storageClass.onDelete=delete' so the backing directory is removed when the claim is deleted.
$ helm install nfs-subdir-external-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=192.168.2.239 \
  --set nfs.path=/data/nfs1 \
  --set storageClass.onDelete=delete

NAME: nfs-subdir-external-provisioner
LAST DEPLOYED: Tue Jan 11 16:58:16 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

$ kubectl get storageclass nfs-client
NAME         PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   7s
This creates the storage class, which is responsible for the creation and lifecycle of the persistent volumes.
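You can also verify that the provisioner pod itself came up. Assuming the default chart labels and the release name used above, something like the commands below should show a single running pod in the default namespace.

# deployment and pod created by the helm release
kubectl get deployment nfs-subdir-external-provisioner
kubectl get pods -l app=nfs-subdir-external-provisioner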
Create Persistent Volume Claim (pvc)
As developers of a pod/deployment, we are still responsible for creating the persistent volume claim (PVC). We just need to make sure its 'storageClassName' references the storage class created in the last step.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-nfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-client
  resources:
    requests:
      storage: 2Gi
Download the sample from my github project, edit it for your environment, and apply it using kubectl.
wget https://raw.githubusercontent.com/fabianlee/k8s-nfs-static-dynamic/main/dynamic/sc-nfs-pvc.yaml

# apply
kubectl apply -f sc-nfs-pvc.yaml

# display new pvc object
$ kubectl get pvc sc-nfs-pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
sc-nfs-pvc   Bound    pvc-cc93ac54-60d2-4be6-9a48-97b2169229db   2Gi        RWO            nfs-client     15s
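Because the storage class uses the 'Immediate' volume binding mode, the backing persistent volume should already exist at this point; you can inspect it with kubectl (the volume name will differ in your environment).

# show the dynamically provisioned persistent volume
kubectl get pv

# describe it using the volume name reported by the pvc
kubectl describe pv $(kubectl get pvc sc-nfs-pvc -o jsonpath='{.spec.volumeName}')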
Validate NGINX pod with NFS mount
Now let’s create a small NGINX pod that mounts the NFS export in its web directory. Any files created on the NFS share can be retrieved via HTTP.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sc-nginx
  name: sc-nfs-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sc-nginx
  template:
    metadata:
      labels:
        app: sc-nginx
    spec:
      volumes:
        - name: nfs-test
          persistentVolumeClaim:
            claimName: sc-nfs-pvc
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - name: nfs-test                     # template.spec.volumes[].name
              mountPath: /usr/share/nginx/html   # mount inside of container
              #readOnly: true                    # if enforcing read-only on volume
Apply this file, which will create an NGINX pod that has the NFS volume mounted at /usr/share/nginx/html.
wget https://raw.githubusercontent.com/fabianlee/k8s-nfs-static-dynamic/main/dynamic/sc-nfs-nginx-with-pvc.yaml

# apply
kubectl apply -f sc-nfs-nginx-with-pvc.yaml

# check pod status
$ kubectl get pods -l=app=sc-nginx
NAME                            READY   STATUS    RESTARTS   AGE
sc-nfs-nginx-564d6d4df6-b8kzx   1/1     Running   0          44m

# capture unique pod name
$ pod_name=$(kubectl get pods -l=app=sc-nginx --no-headers -o=custom-columns=NAME:.metadata.name)

# create html file on share
kubectl exec -it $pod_name -- sh -c "echo \"<h1>hello world</h1>\" > /usr/share/nginx/html/hello.html"

# pull file using HTTP
$ kubectl exec -it $pod_name -- curl http://localhost:80/hello.html
<h1>hello world</h1>
And if you check the NFS host, you will see it has created a directory named 'default-sc-nfs-pvc-<pvc.volumeName>' to represent this persistent volume, and inside it the content just created from the container above.
# list from host NFS export
ls -l /data/nfs1

# show the content of the file just created in the pod above
$ cat /data/nfs1/default-sc-nfs-pvc*/hello.html
<h1>hello world</h1>
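You can also confirm from inside the pod that /usr/share/nginx/html is really an NFS mount and not local container storage (reusing the pod_name variable captured earlier).

# show the NFS mount backing the web directory inside the container
kubectl exec -it $pod_name -- sh -c "mount | grep nfs"
kubectl exec -it $pod_name -- df -h /usr/share/nginx/html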
Deleting this pod and PVC will remove this subdirectory from the NFS host, because we installed the helm chart with 'storageClass.onDelete=delete'.
kubectl delete -f sc-nfs-nginx-with-pvc.yaml
The default behavior, when 'storageClass.onDelete' is not set, is for the storage class to rename the directory with a prefix of 'archived-' to minimize the risk of data loss.
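As an illustration of that default, had the chart been installed without 'storageClass.onDelete', deleting the PVC would leave a renamed copy of the directory on the NFS host, along the lines of the hypothetical listing below.

# hypothetical listing on the NFS host after deleting the pvc,
# when the storage class archives instead of deletes
$ ls /data/nfs1
archived-default-sc-nfs-pvc-pvc-cc93ac54-60d2-4be6-9a48-97b2169229db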