Description of problem:
When using the oadm drain command on a node hosting two pods whose PVCs are backed by Fibre Channel (here docker-registry and cassandra), the first drain appears to work, but there is a roughly 10 minute delay before the two containers start on the new node, and one PV filesystem remains mounted on the drained node. A second drain attempt fails: neither container is able to mount its PVC volume, and the second PV also remains mounted on the drained node.

Version-Release number of selected component (if applicable):
3.5

How reproducible:
Always

Steps to Reproduce:
1. Run oadm manage-node / drain on a node where two pods use PVCs backed by Fibre Channel
2.
3.

Actual results:
The first drain appears to work, but the two containers with PVCs (docker-registry and cassandra) only start after about 10 minutes, and a PV filesystem is still mounted on the drained node. The second attempt fails with no container able to mount the PVC volume, and the second PV also still mounted on the drained node.

Expected results:
The drain completes successfully and the volumes are remounted on the new node.

Additional info:
The workaround is to manually unmount the PVs on the drained node.
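The manual workaround above amounts to finding the PV filesystems still mounted under the kubelet's volume directory on the drained node and unmounting them by hand. A minimal sketch of that check is below; the mount-point prefix matches OpenShift 3.x defaults, and the helper name find_stale_pv_mounts is hypothetical, so adjust both to your environment:

```shell
#!/bin/sh
# Sketch of the workaround: list FC-backed PV filesystems still mounted
# under the kubelet volume directory after a drain, so an administrator
# can umount them manually. Assumes the default OpenShift 3.x volume
# root (/var/lib/origin/openshift.local.volumes); adjust if your node
# uses a different kubelet --root-dir.
find_stale_pv_mounts() {
  # Reads `mount`-style lines ("DEV on MOUNTPOINT type FS (opts)") on
  # stdin and prints the mount points of FC volume-plugin mounts.
  awk '/openshift.local.volumes\/plugins\/kubernetes.io\/fc/ {print $3}'
}

# On the drained node, one would then run something like:
#   mount | find_stale_pv_mounts | xargs -r -n1 umount
```

This only identifies the stale mounts; the actual umount should be done deliberately, after confirming no pod on the node still uses the volume.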
Verified on the version below:

# openshift version
openshift v3.5.5.31.34
kubernetes v1.5.2+43a9be4
etcd 3.1.0

When the node is drained, the volume is automatically unmounted/detached from the drained node and attached/mounted to the new node.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:3049