Bug 1867152 - Manila volumes cannot be unmounted after NFS driver pod restart
Summary: Manila volumes cannot be unmounted after NFS driver pod restart
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.6.0
Assignee: Jan Safranek
QA Contact: Wei Duan
Duplicates: 1867143 1876219
Depends On:
Blocks: 1862523 1878980
Reported: 2020-08-07 13:59 UTC by Jan Safranek
Modified: 2020-09-15 06:43 UTC
6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1878980
Last Closed:
Target Upstream Version:


System ID Priority Status Summary Last Updated
Github openshift csi-driver-nfs pull 31 None closed Bug 1867152: <carry>: Umount volumes with force 2020-09-15 06:42:28 UTC

Description Jan Safranek 2020-08-07 13:59:19 UTC
How reproducible:

Steps to Reproduce:
1. Create a Manila NFS PV and use it in a pod
2. Restart NFS driver pods, e.g.:
oc -n openshift-manila-csi-driver delete pod --all

3. Delete the pod created at step 1.
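
For reference, step 1 can be sketched with a minimal PVC and pod manifest (the storage class name, object names, and image are illustrative assumptions, not taken from this bug):

```yaml
# Sketch only -- storage class, names, and image are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manila-pvc
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-manila-nfs      # hypothetical Manila NFS storage class
---
apiVersion: v1
kind: Pod
metadata:
  name: manila-test
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal   # placeholder image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: manila-pvc
```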

Actual results:
The pod is Terminating forever!

Expected results:
The pod is deleted.

This is caused by the NFS driver using the pod's IP address in the NFS mount options:

$ mount -v | grep csi
on /var/lib/kubelet/pods/6c9305ff-4f2d-4151-af88-a615317ed519/volumes/kubernetes.io~csi/pvc-0caf4758-7215-4893-8e8e-4554d562a4b9/mount type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=,local_lock=none,addr=

When the driver pod is restarted, it gets a different IP and is then unable to unmount the volume: /bin/umount hangs and never finishes.

The NFS driver should use "hostNetwork: true" instead, because the node IP is unlikely to change.
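
The suggested change can be sketched as a fragment of the node plugin's pod template (names and image are illustrative assumptions; this is not the actual openshift-manila-csi-driver manifest):

```yaml
# Illustrative fragment only -- not the real driver manifest.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-nodeplugin-nfsplugin          # hypothetical name
spec:
  selector:
    matchLabels:
      app: csi-nodeplugin-nfsplugin
  template:
    metadata:
      labels:
        app: csi-nodeplugin-nfsplugin
    spec:
      hostNetwork: true                   # NFS mounts record the node IP, which survives pod restarts
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS resolution with host networking
      containers:
      - name: nfs
        image: quay.io/example/csi-driver-nfs:latest   # placeholder image
        securityContext:
          privileged: true                # required to perform mounts on the host
```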

In addition, the NFS driver must use `umount -f` so that it does not get stuck when unmounting volumes with a stale `clientaddr`.
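
The forced-unmount fallback can be sketched in shell (a hypothetical helper for illustration; the actual fix in csi-driver-nfs pull 31 implements the equivalent inside the driver):

```shell
# Hypothetical helper: try a clean unmount first, then force, then lazy detach.
# 'umount -f' aborts pending NFS requests so the call cannot hang forever on an
# unreachable server; 'umount -l' detaches the mount point as a last resort.
unmount_with_force() {
  local target="$1"
  umount "$target" 2>/dev/null \
    || umount -f "$target" 2>/dev/null \
    || umount -l "$target"
}
```

The lazy detach is kept as a final fallback so the mount point is removed from the namespace even when the NFS server never responds.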

Comment 1 Jan Safranek 2020-09-08 08:26:19 UTC
*** Bug 1876219 has been marked as a duplicate of this bug. ***

Comment 4 Mike Fedosin 2020-09-10 13:59:04 UTC
*** Bug 1867143 has been marked as a duplicate of this bug. ***

Comment 5 Wei Duan 2020-09-14 09:05:26 UTC
1. Installed OCP nightly 4.6.0-0.nightly-2020-09-12-230035 on OSP 13 and checked that the manila-csi-driver installed successfully.
2. Created a deployment and a PVC with the csi-manila-ceph storage class; the pod is Running and the PVC is Bound.
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mypvc01   Bound    pvc-342269ff-f7de-49a9-8f93-864abb28c0b0   1Gi        RWO            csi-manila-ceph   23m
3. Checked on the node that the node IP is used:
sh-4.4# mount -v | grep pvc-342269ff-f7de-49a9-8f93-864abb28c0b0
on /var/lib/kubelet/pods/3ddc5d60-aa01-4cef-b708-7b85b096b631/volumes/kubernetes.io~csi/pvc-342269ff-f7de-49a9-8f93-864abb28c0b0/mount type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=,local_lock=none,addr=
4. Deleted the manila-csi-driver pods and waited for them to come back up.
5. Deleted the pod; it was deleted successfully.
6. Created another pod successfully and checked that the previous data still exists under the mount directory.

Marking VERIFIED according to the test results above.
