Description of problem:
Sometimes when a pod is moved as a result of a node drain (or similar), the bind mount is unmounted but the attached AWS EBS device never gets unmounted from the node. As a result, the pod never starts on another node.
Steps to Reproduce:
1. Create a multi-node cluster and create 5-6 deployments (not bare pods) on a node. Make sure these pods write to the mounted EBS PV (e.g. a busybox container that writes to the volume).
2. Drain that node so that all pods running on it move.
3. Check whether all moved pods are running successfully. I had to repeat this several times to reproduce the bug.
Actual results:
One or more pods can get stuck in ContainerCreating because AWS never attaches the device on the new node.
Expected results:
All pods should move successfully.
If the device a pod is using is "busy" (i.e. being written to), the first unmount fails with a "device is busy" error. Eventually the container is deleted and the device is no longer busy, but the error handling code in the volumemanager never kicks in: it deletes the device from the actual state of world, thinking the device was unmounted. In other words, the current code silently swallows the unmount error, and because the error is not propagated, the volumemanager believes the device was successfully unmounted.
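The failure mode described above can be illustrated with a minimal Go sketch. This is not the actual volumemanager code; all type and function names here are illustrative, and the "actual state of world" cache is reduced to a simple map:

```go
package main

import (
	"errors"
	"fmt"
)

// errDeviceBusy stands in for the "device is busy" error returned
// when the EBS device is still being written to during unmount.
var errDeviceBusy = errors.New("device is busy")

// actualStateOfWorld mimics the volumemanager's cache of devices
// it believes are still mounted on the node.
type actualStateOfWorld struct {
	mounted map[string]bool
}

func (a *actualStateOfWorld) markUnmounted(device string) {
	delete(a.mounted, device)
}

// unmountDevice simulates an unmount attempt that fails while the
// device is busy.
func unmountDevice(device string, busy bool) error {
	if busy {
		return errDeviceBusy
	}
	return nil
}

// buggyUnmount reproduces the reported behavior: the unmount error
// is silently dropped, so the device is removed from the actual
// state of world even though it is still attached and mounted.
func buggyUnmount(a *actualStateOfWorld, device string, busy bool) {
	_ = unmountDevice(device, busy) // error swallowed
	a.markUnmounted(device)         // state updated unconditionally
}

// fixedUnmount propagates the error and only updates the cached
// state on a successful unmount, so a later retry can succeed once
// the device is no longer busy.
func fixedUnmount(a *actualStateOfWorld, device string, busy bool) error {
	if err := unmountDevice(device, busy); err != nil {
		return err
	}
	a.markUnmounted(device)
	return nil
}

func main() {
	a := &actualStateOfWorld{mounted: map[string]bool{"/dev/xvdf": true}}
	buggyUnmount(a, "/dev/xvdf", true)
	fmt.Println("buggy path still tracked as mounted:", a.mounted["/dev/xvdf"])

	b := &actualStateOfWorld{mounted: map[string]bool{"/dev/xvdf": true}}
	err := fixedUnmount(b, "/dev/xvdf", true)
	fmt.Println("fixed path error propagated:", err != nil, "still tracked:", b.mounted["/dev/xvdf"])
}
```

In the buggy path the device vanishes from the cache despite the failed unmount, which is why the node still holds the EBS attachment while the controller believes it is free to attach elsewhere.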
Verified that the umount issue is fixed with the following steps:
Create 5 apps like the one below:
oc new-app php:5.6~https://github.com/openshift/sti-php --context-dir='5.6/test/test-app'
Create a dynamic PVC and add it to the deployment config:
oc volume dc/sti-php --add --type=persistentVolumeClaim --mount-path=/opt1 --name=v1 --claim-name=ebsc2 --overwrite
Evacuate the node:
oadm manage-node ip-172-18-5-95.ec2.internal --evacuate --pod-selector="app=sti-php"
Check that the EBS volume is unmounted from node ip-172-18-5-95.ec2.internal.
No error messages appear when running grep "device is busy error" /var/log/messages.