Description of problem:

Project running on starter-ca-central-1. After a pod crash, deleting the pod and starting a new one fails with the following error:

  Multi-Attach error for volume "pvc-e2c93dee-a401-11e7-9fab-02d8407159d1"
  Volume is already exclusively attached to one node and can't be attached to another

No other pods are running at this point; the pod being started is the only consumer of the PVC.

Actual results:
Pod fails to start.

Expected results:
Pod should start normally.

Additional info:

$ oc get events
LASTSEEN  FIRSTSEEN  COUNT  NAME              KIND                   SUBOBJECT  TYPE     REASON                       SOURCE                                                    MESSAGE
15m       3h         80     mongodb-17-tv0sz  Pod                               Warning  FailedMount                  kubelet, ip-172-31-21-192.ca-central-1.compute.internal  Unable to mount volumes for pod "mongodb-17-tv0sz_dentist-inventory(57899858-bee2-11e7-b7cc-02ec8e61afcf)": timeout expired waiting for volumes to attach/mount for pod "dentist-inventory"/"mongodb-17-tv0sz". list of unattached/unmounted volumes=[mongodb-data]
36m       2h         8317   mongodb-17-tv0sz  Pod                               Warning  FailedAttachVolume           attachdetach                                              Multi-Attach error for volume "pvc-e2c93dee-a401-11e7-9fab-02d8407159d1" Volume is already exclusively attached to one node and can't be attached to another
14m       14m        1      mongodb-17-z4sz3  Pod                               Normal   Scheduled                    default-scheduler                                         Successfully assigned mongodb-17-z4sz3 to ip-172-31-23-23.ca-central-1.compute.internal
14m       14m        1      mongodb-17-z4sz3  Pod                               Normal   SuccessfulMountVolume        kubelet, ip-172-31-23-23.ca-central-1.compute.internal    MountVolume.SetUp succeeded for volume "default-token-59tzp"
1m        12m        6      mongodb-17-z4sz3  Pod                               Warning  FailedMount                  kubelet, ip-172-31-23-23.ca-central-1.compute.internal    Unable to mount volumes for pod "mongodb-17-z4sz3_dentist-inventory(ae680037-befb-11e7-b7cc-02ec8e61afcf)": timeout expired waiting for volumes to attach/mount for pod "dentist-inventory"/"mongodb-17-z4sz3". list of unattached/unmounted volumes=[mongodb-data]
14m       14m        1      mongodb-17        ReplicationController             Normal   SuccessfulDelete             replication-controller                                    Deleted pod: mongodb-17-tv0sz
14m       14m        1      mongodb-17        ReplicationController             Normal   SuccessfulCreate             replication-controller                                    Created pod: mongodb-17-z4sz3
14m       14m        1      mongodb           DeploymentConfig                  Normal   ReplicationControllerScaled  deploymentconfig-controller                               Scaled replication controller "mongodb-17" from 848073107396 to 0
14m       14m        1      mongodb           DeploymentConfig                  Normal   ReplicationControllerScaled  deploymentconfig-controller                               Scaled replication controller "mongodb-17" from 862395782212 to 1

$ oc get pvc -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      pv.kubernetes.io/bind-completed: "yes"
      pv.kubernetes.io/bound-by-controller: "yes"
      volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
    creationTimestamp: 2017-09-28T04:02:46Z
    labels:
      app: mongodb-persistent
      template: mongodb-persistent-template
    name: mongodb
    namespace: dentist-inventory
    resourceVersion: "51158434"
    selfLink: /api/v1/namespaces/dentist-inventory/persistentvolumeclaims/mongodb
    uid: e2c93dee-a401-11e7-9fab-02d8407159d1
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: ebs
    volumeName: pvc-e2c93dee-a401-11e7-9fab-02d8407159d1
  status:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 1Gi
    phase: Bound
kind: List
metadata: {}
resourceVersion: ""
selfLink: ""
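Since the claim is ReadWriteOnce on the kubernetes.io/aws-ebs provisioner, the attachdetach controller apparently still counts the volume as attached to the old node (ip-172-31-21-192) while the replacement pod was scheduled onto ip-172-31-23-23. A minimal sketch for confirming where the EBS volume actually sits, assuming cluster-admin access to read the PV plus AWS CLI credentials for the cluster's account (the vol- ID below is a placeholder, not a value from this report):

$ oc get pv pvc-e2c93dee-a401-11e7-9fab-02d8407159d1 -o yaml | grep volumeID
# expected form: volumeID: aws://ca-central-1a/vol-0123456789abcdef0
$ aws ec2 describe-volumes --volume-ids vol-0123456789abcdef0 \
    --query 'Volumes[0].Attachments[].[InstanceId,State]' --output text
# an attachment still in state "attached" to the old node's instance while the
# new pod waits would point at a stale attachment rather than a second consumer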
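A possible workaround, sketched under the assumption that the application can tolerate a short outage (the ~6 minute figure is the attach/detach controller's default force-detach timeout, not something measured here):

$ oc scale dc/mongodb --replicas=0   # remove every consumer of the PVC
$ oc get pods -w                     # wait until the mongodb pod is completely gone
# allow up to ~6 minutes for the controller to (force-)detach the EBS volume
$ oc scale dc/mongodb --replicas=1   # the new pod should now attach cleanly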