If you have an external NFS server and want to share that volume in RWX mode (ReadWriteMany), the most basic way is to manually create the persistent volume and persistent volume claim.
In this article, I will show you how to manually create a pv (persistent volume) representing an external NFS, and persistent volume claim (pvc) that can be written to by a deployment with multiple pods.
If you are instead looking for a more advanced dynamic solution where a storageclass is used to create the persistent volume, then read my other article here.
Prerequisite, NFS export
You need to have an external NFS export available. This could be on a dedicated storage appliance, a clustered software solution, or even the local Host system (which is how we will demonstrate here).
To create an NFS export on your Ubuntu host, I’ll pull instructions from the full article I wrote here.
Install OS packages
sudo apt-get update
sudo apt-get install nfs-common nfs-kernel-server -y
Create directory to export
sudo mkdir -p /data/nfs1
sudo chown nobody:nogroup /data/nfs1
sudo chmod g+rwxs /data/nfs1
Export Directory
# limit access to clients in 192.168/16 network
$ echo -e "/data/nfs1\t192.168.0.0/16(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports

$ sudo exportfs -av
/data/nfs1      192.168.0.0/16
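If you want to sanity-check the exports entry before appending it to /etc/exports, here is a small sketch that builds the same line from its three parts (directory, client network, options); the variable names are my own, not from the original commands:

```shell
# build the /etc/exports entry: <directory><TAB><client-network>(<options>)
EXPORT_DIR=/data/nfs1
CLIENT_NET=192.168.0.0/16
OPTS="rw,sync,no_subtree_check,no_root_squash"
LINE=$(printf '%s\t%s(%s)' "$EXPORT_DIR" "$CLIENT_NET" "$OPTS")
echo "$LINE"
```

Once the line looks right, it can be appended with `sudo tee -a /etc/exports` as shown above.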
Restart NFS service
# restart and show status
sudo systemctl restart nfs-kernel-server
sudo systemctl status nfs-kernel-server
Show export details
# show for localhost
$ /sbin/showmount -e localhost
Export list for 127.0.0.1:
/data/nfs1 192.168.0.0/16

# show for default public IP of host
$ /sbin/showmount -e 192.168.2.239
Export list for 192.168.2.239:
/data/nfs1 192.168.0.0/16
Prerequisite, NFS client package on K8s nodes
The other requirement is that all Kubernetes nodes have the NFS client packages available. If your K8s worker nodes are based on Ubuntu, this means having the nfs-common package installed on each worker node.
# on Debian/Ubuntu based nodes
sudo apt update
sudo apt install nfs-common -y

# on RHEL based nodes
# sudo yum install nfs-utils -y
As a test, SSH into each K8s worker node and verify access to the host NFS export created in the last section.
# use default public IP of host
$ /sbin/showmount -e 192.168.2.239
Export list for 192.168.2.239:
/data/nfs1 192.168.0.0/16
Create NFS Persistent Volume (pv)
Now you need to create the NFS persistent volume, specifying the IP address of your NFS server (nfs.server) and its export path (nfs.path).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  labels:
    name: mynfs
spec:
  storageClassName: nfs-manual # same storage class as pvc
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany # pvc must match
  nfs:
    server: 192.168.2.239 # IP address of NFS server
    path: "/data/nfs1" # path to exported directory
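Statically created PVs like this one default to a Retain reclaim policy, which means the underlying NFS data is kept even after the claim is deleted. If you prefer to make that explicit rather than rely on the default, a sketch of the optional field:

```yaml
spec:
  persistentVolumeReclaimPolicy: Retain # keep NFS data when the claim is released
```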
Download the sample from my github project, edit it for your environment, and apply it using kubectl.
wget https://raw.githubusercontent.com/fabianlee/k8s-nfs-static-dynamic/main/static/nfs-persistent-volume.yaml

# first replace IP and export path values, then apply
vi nfs-persistent-volume.yaml
kubectl apply -f nfs-persistent-volume.yaml

# display new pv object
$ kubectl get pv nfs-pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv   100Mi      RWX            Retain           Available           nfs-manual              5s
Create Persistent Volume Claim (pvc)
Now manually create the static persistent volume claim. Note that although the claim requests only 50Mi, it will bind to the 100Mi PV above: a claim binds to any PV with at least the requested capacity, the same storageClassName, and a compatible access mode.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: nfs-manual
  accessModes:
    - ReadWriteMany # must be the same as PersistentVolume
  resources:
    requests:
      storage: 50Mi
Download the sample from my github project, edit it for your environment, and apply it using kubectl.
wget https://raw.githubusercontent.com/fabianlee/k8s-nfs-static-dynamic/main/static/nfs-persistent-volume-claim.yaml

# apply
kubectl apply -f nfs-persistent-volume-claim.yaml

# display new pvc object
$ kubectl get pvc nfs-pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    nfs-pv   100Mi      RWX            nfs-manual     6s
Validate NGINX pod with NFS mount
Now let’s create a small NGINX pod that mounts the NFS export in its web directory. Any files created on the NFS share can be retrieved via HTTP.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nfs-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nfs-test
          persistentVolumeClaim:
            claimName: nfs-pvc # name of pvc
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - name: nfs-test # template.spec.volumes[].name
              mountPath: /usr/share/nginx/html # mount inside of container
Apply this file, which creates an NGINX pod with the NFS export mounted at /usr/share/nginx/html.
wget https://raw.githubusercontent.com/fabianlee/k8s-nfs-static-dynamic/main/static/nfs-nginx-test-pod.yaml

# apply
kubectl apply -f nfs-nginx-test-pod.yaml

# check pod status
$ kubectl get pods -l=app=nginx
NAME                         READY   STATUS    RESTARTS   AGE
nfs-nginx-7df548d986-9lnqh   1/1     Running   0          35s

# capture unique pod name
$ pod_name=$(kubectl get pods -l=app=nginx --no-headers -o=custom-columns=NAME:.metadata.name)

# create html file on share
kubectl exec -it $pod_name -- sh -c "echo \"<h1>hello world</h1>\" > /usr/share/nginx/html/hello.html"

# pull file using HTTP
$ kubectl exec -it $pod_name -- curl http://localhost:80/hello.html
<h1>hello world</h1>
And from the NFS host machine, outside the pod, you should also be able to see the sample file that was just created.
# from machine hosting NFS share, file can be seen
$ cat /data/nfs1/hello.html
<h1>hello world</h1>
Validate Deployment with multiple pods writing to NFS mount
The other scenario we want to test is the ReadWriteMany (RWX) aspect, where multiple pods are able to write to the same shared NFS location.
Below is a deployment that spawns 3 replica pods. Each pod uses the Downward API to discover its node name, pod name, and pod IP, and as part of its liveness probe every 20 seconds appends a log line to the file named “nfs-liveness-exec” on the NFS share.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    test: nfs-liveness
  name: nfs-liveness-exec
spec:
  replicas: 3
  selector:
    matchLabels:
      test: nfs-liveness
  template:
    metadata:
      labels:
        test: nfs-liveness
    spec:
      volumes:
        # volume for nfs mount
        - name: nfs-test
          persistentVolumeClaim:
            claimName: nfs-pvc # name of pvc
        # volume for DownwardAPI pod introspection
        - name: podinfo
          downwardAPI:
            items:
              - path: "name"
                fieldRef:
                  fieldPath: metadata.name
      containers:
        - name: nfs-liveness
          image: k8s.gcr.io/busybox
          args:
            - /bin/sh
            - -c
            - touch /tmp/healthy; sleep 3000
          # periodic liveness probe that appends node,pod info to shared NFS
          livenessProbe:
            exec:
              command:
                - /bin/sh
                - -c
                - echo $(date '+%Y-%m-%d %H:%M:%S') ISALIVE node=$MY_NODE_NAME pod=$MY_POD_NAME podip=$MY_POD_IP >> /mnt/nfs1/nfs-liveness-exec
            initialDelaySeconds: 5
            periodSeconds: 20
          volumeMounts:
            # mount for shared NFS
            - name: nfs-test # template.spec.volumes[].name
              mountPath: /mnt/nfs1 # mount inside of container
            # mount for Downward API introspection
            - name: podinfo
              mountPath: /etc/podinfo
          # environment variables exposed via the Downward API
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: MY_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
Once deployed, each of the 3 pods appends a log line to the same “nfs-liveness-exec” file on the NFS share every 20 seconds.
wget https://raw.githubusercontent.com/fabianlee/k8s-nfs-static-dynamic/main/static/nfs-deployment-rwx.yaml

# apply
kubectl apply -f nfs-deployment-rwx.yaml

# view deployment
$ kubectl get deployment nfs-liveness-exec
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
nfs-liveness-exec   3/3     3            3           74s

# view 3 pods
$ kubectl get pod -l=test=nfs-liveness
NAME                                 READY   STATUS    RESTARTS   AGE
nfs-liveness-exec-5864cdcf9d-srvww   1/1     Running   0          117s
nfs-liveness-exec-5864cdcf9d-blhc7   1/1     Running   0          116s
nfs-liveness-exec-5864cdcf9d-dbp8c   1/1     Running   0          116s

# view single log that each pod is writing to
$ kubectl exec -it deploy/nfs-liveness-exec -- tail -n4 /mnt/nfs1/nfs-liveness-exec
2022-01-11 16:06:49 ISALIVE node=k3s-1 pod=nfs-liveness-exec-5864cdcf9d-blhc7 podip=10.42.0.12
2022-01-11 16:06:49 ISALIVE node=k3s-3 pod=nfs-liveness-exec-5864cdcf9d-dbp8c podip=10.42.2.12
2022-01-11 16:07:09 ISALIVE node=k3s-2 pod=nfs-liveness-exec-5864cdcf9d-srvww podip=10.42.1.11
From the NFS host, you will be able to see this same content.
$ tail -n4 /data/nfs1/nfs-liveness-exec
2022-01-11 16:10:49 ISALIVE node=k3s-1 pod=nfs-liveness-exec-5864cdcf9d-blhc7 podip=10.42.0.12
2022-01-11 16:11:09 ISALIVE node=k3s-2 pod=nfs-liveness-exec-5864cdcf9d-srvww podip=10.42.1.11
2022-01-11 16:11:09 ISALIVE node=k3s-3 pod=nfs-liveness-exec-5864cdcf9d-dbp8c podip=10.42.2.12
2022-01-11 16:11:09 ISALIVE node=k3s-1 pod=nfs-liveness-exec-5864cdcf9d-blhc7 podip=10.42.0.12
REFERENCES
stackoverflow, discussion on dynamic and non-dynamic NFS solutions
gist from admun, nfs-client-provisioner broken on rancher “selflink was empty”
ccaplat on Medium, manual NFS volume and claim
k8s doc, feature gates settings for selfLink
forums.rancher, enable feature flags for k3s
stackoverflow, issue with NFS and selfLink in 1.20+
raymondc.net, reasons why storageclass is needed
NOTES
Example of replacing values with sed and applying in one step
# replace values with your own
sed 's/192.168.2.239/A.B.C.D/ ; s#/data/nfs1#/your/path#' nfs-persistent-volume.yaml | kubectl apply -f -
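Before piping the result into kubectl, you can dry-run the same sed expressions against a small sample to confirm the substitutions behave as expected. The replacement values (10.0.0.5, /exports/share) and the /tmp file path below are illustrative, not from the article:

```shell
# write a sample snippet resembling the PV spec
cat <<'EOF' > /tmp/nfs-pv-sample.yaml
  nfs:
    server: 192.168.2.239
    path: "/data/nfs1"
EOF

# preview both substitutions without applying anything to the cluster
sed 's/192.168.2.239/10.0.0.5/ ; s#/data/nfs1#/exports/share#' /tmp/nfs-pv-sample.yaml
```

If the preview shows the server and path you expect, swap your real values in and pipe to `kubectl apply -f -` as shown above.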