Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1867152

Summary: Manila volumes cannot be unmounted after NFS driver pod restart
Product: OpenShift Container Platform
Reporter: Jan Safranek <jsafrane>
Component: Storage
Assignee: Jan Safranek <jsafrane>
Storage sub component: OpenStack CSI Drivers
QA Contact: Wei Duan <wduan>
Status: CLOSED ERRATA
Docs Contact:
Severity: high
Priority: unspecified
CC: aos-bugs, gouthamr, msamymos, tbarron, vimartin, wduan
Version: 4.5
Target Milestone: ---
Target Release: 4.6.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The Manila CSI driver used the driver pod's IP address to mount NFS volumes.
Consequence: After a pod restart, the pod got a new IP address and was not able to unmount volumes mounted by the previous pod.
Fix: The pod now uses the host network and the host IP address to mount volumes.
Result: NFS volumes can be unmounted after a driver pod restart.
Story Points: ---
Clone Of:
Clones: 1878980 (view as bug list)
Environment:
Last Closed: 2020-10-27 16:26:34 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1862523, 1878980

Description Jan Safranek 2020-08-07 13:59:19 UTC
How reproducible:
Always

Steps to Reproduce:
1. Create a Manila NFS PV and use it in a pod
2. Restart NFS driver pods, e.g.:
oc -n openshift-manila-csi-driver delete pod --all

3. Delete the pod created in step 1.

Actual results:
The pod is stuck in Terminating forever!

Expected results:
The pod is deleted.

This is caused by the NFS driver using the pod's IP address in the NFS mount:

$ mount -v|grep csi
172.16.32.1:/volumes/_nogroup/4b2ffc1a-41aa-4b62-8e4c-ad22ef698977 on /var/lib/kubelet/pods/6c9305ff-4f2d-4151-af88-a615317ed519/volumes/kubernetes.io~csi/pvc-0caf4758-7215-4893-8e8e-4554d562a4b9/mount type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.129.2.6,local_lock=none,addr=172.16.32.1)

When the driver pod is restarted, it gets a different IP and is not able to unmount the volume - /bin/umount hangs and never finishes!
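A small sketch of how such a stale mount could be detected on a node (the `stale_clientaddr` helper and the sample mount line are illustrative, not part of the driver; the key point is comparing the `clientaddr=` mount option against the node's current IP):

```shell
# Extract the clientaddr= option from an NFS mount line and compare it
# to the node's current IP. A mismatch suggests the mount was created by
# a driver pod that has since restarted with a different IP.
stale_clientaddr() {
  mount_line="$1"
  node_ip="$2"
  # Pull the clientaddr=... value out of the mount options.
  client=$(printf '%s\n' "$mount_line" | grep -o 'clientaddr=[^,)]*' | cut -d= -f2)
  if [ "$client" != "$node_ip" ]; then
    echo "stale: clientaddr=$client node=$node_ip"
  else
    echo "ok"
  fi
}
```

Run against the output of `mount -v | grep csi` on the node, this would flag the mount above, whose `clientaddr=10.129.2.6` no longer matches any running driver pod.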

The NFS driver should use "hostNetwork: true", because the node IP is unlikely to change.
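A minimal sketch of the relevant part of the node plugin spec (`hostNetwork` and `dnsPolicy` are standard Kubernetes fields; the DaemonSet name here is hypothetical, not the actual driver manifest):

```yaml
# Illustrative fragment: running the node plugin on the host network makes
# NFS mounts record the stable node IP as clientaddr instead of the pod IP.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openstack-manila-csi-nodeplugin   # hypothetical name
spec:
  template:
    spec:
      hostNetwork: true                    # mount with the node's IP
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS on host network
```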

In addition, the NFS driver must use `umount -f` so it does not get stuck when unmounting volumes mounted with the wrong `clientaddr`.
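As a sketch, a forced unmount avoids blocking on a server that will never answer for the stale `clientaddr` (the wrapper function and its DRY_RUN switch are illustrative, not driver code):

```shell
# Wrapper around umount that forces the unmount (-f) so it does not hang
# on a stale clientaddr; a plain umount would block indefinitely.
# Set DRY_RUN=1 to print the command instead of executing it.
force_umount() {
  target="$1"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "umount -f $target"
  else
    umount -f "$target"
  fi
}
```

If the mount point is still busy, `-f` can be combined with a lazy detach (`umount -f -l`), which removes the mount from the filesystem namespace immediately and cleans up once it is no longer in use.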

Comment 1 Jan Safranek 2020-09-08 08:26:19 UTC
*** Bug 1876219 has been marked as a duplicate of this bug. ***

Comment 4 Mike Fedosin 2020-09-10 13:59:04 UTC
*** Bug 1867143 has been marked as a duplicate of this bug. ***

Comment 5 Wei Duan 2020-09-14 09:05:26 UTC
1. Installed OCP nightly 4.6.0-0.nightly-2020-09-12-230035 on OSP13 and checked that the manila-csi-driver installed successfully.
2. Created a deployment and a PVC with the csi-manila-ceph storage class; the pod is running and the PVC is Bound.
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mypvc01   Bound    pvc-342269ff-f7de-49a9-8f93-864abb28c0b0   1Gi        RWO            csi-manila-ceph   23m
3. Checked on the node that the node IP (172.16.34.4) is used:
sh-4.4# mount -v | grep pvc-342269ff-f7de-49a9-8f93-864abb28c0b0
172.16.32.1:/volumes/_nogroup/88833f2c-4cce-4b3e-a927-62b9daa29a32 on /var/lib/kubelet/pods/3ddc5d60-aa01-4cef-b708-7b85b096b631/volumes/kubernetes.io~csi/pvc-342269ff-f7de-49a9-8f93-864abb28c0b0/mount type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.16.34.4,local_lock=none,addr=172.16.32.1)
4. Deleted the manila-csi-driver pods and waited for them to come back up.
5. Deleted the pod; it was deleted successfully.
6. Waited for another pod to be created successfully and checked that the previous data still existed under the mount directory.

Marking VERIFIED according to the test results above.

Comment 7 errata-xmlrpc 2020-10-27 16:26:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

Comment 8 W. Trevor King 2021-04-05 17:46:22 UTC
Removing UpgradeBlocker from this older bug, to remove it from the suspect queue described in [1].  If you feel this bug still needs to be a suspect, please add the keyword again.

[1]: https://github.com/openshift/enhancements/pull/475