Steps to Reproduce:
1. Create a Manila NFS PV and use it in a pod (a minimal example follows the expected results below)
2. Restart NFS driver pods, e.g.:
oc -n openshift-manila-csi-driver delete pod --all
3. Delete the pod created in step 1.

Actual results:
The pod is stuck in Terminating forever.

Expected results:
The pod is deleted.
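A minimal sketch of step 1; the storage class name (csi-manila-nfs), object names, image, and mount path are assumptions, any Manila NFS storage class works:

$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manila-nfs-pvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-manila-nfs
---
apiVersion: v1
kind: Pod
metadata:
  name: manila-nfs-pod
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: manila-nfs-pvc
EOF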
This is caused by the NFS driver using the pod's IP address as the clientaddr of the NFS mount:
$ mount -v|grep csi
172.16.32.1:/volumes/_nogroup/4b2ffc1a-41aa-4b62-8e4c-ad22ef698977 on /var/lib/kubelet/pods/6c9305ff-4f2d-4151-af88-a615317ed519/volumes/kubernetes.io~csi/pvc-0caf4758-7215-4893-8e8e-4554d562a4b9/mount type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.129.2.6,local_lock=none,addr=172.16.32.1)
When the driver pod is restarted, it gets a different IP address and is no longer able to unmount the volume: /bin/umount hangs and never finishes.
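To confirm the mismatch, compare the clientaddr in the mount options above with the current driver pod IPs after the restart:

$ oc -n openshift-manila-csi-driver get pods -o wide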
The NFS driver should use "hostNetwork: true", because the node IP is unlikely to change.
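A sketch of the proposed change, assuming the node plugin is a DaemonSet named openstack-manila-csi-nodeplugin (the object name is an assumption; in practice the fix belongs in the manifests that deploy the driver):

$ oc -n openshift-manila-csi-driver patch daemonset openstack-manila-csi-nodeplugin \
    --type=merge -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'

With hostNetwork: true, the mount's clientaddr becomes the node IP, which survives driver pod restarts.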
In addition, the NFS driver must use `umount -f` so it does not get stuck when unmounting volumes with the wrong `clientaddr`.
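For mounts that are already stuck, a forced unmount on the node should unblock them; the path below is the one from the mount output above:

$ umount -f /var/lib/kubelet/pods/6c9305ff-4f2d-4151-af88-a615317ed519/volumes/kubernetes.io~csi/pvc-0caf4758-7215-4893-8e8e-4554d562a4b9/mount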
*** Bug 1876219 has been marked as a duplicate of this bug. ***
*** Bug 1867143 has been marked as a duplicate of this bug. ***
1. Installing OCP nightly 4.6.0-0.nightly-2020-09-12-230035 on OSP13, checking that the manila-csi-driver installed successfully.
2. Creating a deployment and a PVC with the csi-manila-ceph storage class; the pod is running and the PVC is Bound:
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mypvc01   Bound    pvc-342269ff-f7de-49a9-8f93-864abb28c0b0   1Gi        RWO            csi-manila-ceph   23m
3. Checking on the node that the node IP (172.16.34.4) is used as clientaddr:
sh-4.4# mount -v | grep pvc-342269ff-f7de-49a9-8f93-864abb28c0b0
172.16.32.1:/volumes/_nogroup/88833f2c-4cce-4b3e-a927-62b9daa29a32 on /var/lib/kubelet/pods/3ddc5d60-aa01-4cef-b708-7b85b096b631/volumes/kubernetes.io~csi/pvc-342269ff-f7de-49a9-8f93-864abb28c0b0/mount type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.16.34.4,local_lock=none,addr=172.16.32.1)
4. Deleting the manila-csi-driver pods and waiting for them to come back up.
5. Deleting the pod; the pod was deleted successfully.
6. Waiting for another pod to be created successfully, and checking that the previous data still exists under the mount directory (see the sketch below).
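A sketch of the data check in step 6, with hypothetical pod names (mypod-old, mypod-new) and mount path (/mnt/data):

$ oc exec mypod-old -- sh -c 'echo marker > /mnt/data/marker'   # before the driver restart
$ oc exec mypod-new -- cat /mnt/data/marker                     # after recreation; expect "marker"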
Marking VERIFIED according to the test results above.