Bug 1246654
| Summary: | EBS Persistent volumes do not unmount when the pod moves | | |
|---|---|---|---|
| Product: | OKD | Reporter: | Kenny Woodson <kwoodson> |
| Component: | Storage | Assignee: | Mark Turansky <mturansk> |
| Status: | CLOSED NOTABUG | QA Contact: | Liang Xia <lxia> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 3.x | CC: | libra-bugs, twiest |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | Doc Type: | Bug Fix | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2015-09-08 15:51:29 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
I tested this today and the behavior is working. I have a pod created using EBS storage. I marked the pod's current node unschedulable and deleted the pod. It was successfully rescheduled onto a new node, the drive was mounted there, and my application continued to work. We have a workaround for the other bug (https://bugzilla.redhat.com/show_bug.cgi?id=1246649). This one appears to be working. Not a bug, as the issue's reporter states in the last comment.
Description of problem:

When my pod moves from node A to node B, the persistent volume stays mounted on node A.

```
# oc get events
Fri, 24 Jul 2015 14:40:21 -0400  Fri, 24 Jul 2015 14:40:31 -0400  2  mysql-1-25aej  Pod  failedMount  {kubelet 172.16.13.26}  Unable to mount volumes for pod "mysql-1-25aej_monitoring": Error attaching EBS volume: VolumeInUse: vol-33b97ad2 is already attached to an instance
```

It appears the drive stayed mounted on the old node and did not unmount and follow the pod.

Version-Release number of selected component (if applicable):

```
# rpm -qa | grep openshift
openshift-3.0.1.0-0.git.205.2c9a9b0.el7ose.x86_64
openshift-sdn-ovs-3.0.1.0-0.git.205.2c9a9b0.el7ose.x86_64
openshift-node-3.0.1.0-0.git.205.2c9a9b0.el7ose.x86_64
tuned-profiles-openshift-node-3.0.1.0-0.git.205.2c9a9b0.el7ose.x86_64
openshift-master-3.0.1.0-0.git.205.2c9a9b0.el7ose.x86_64
```

How reproducible:

Very reproducible. Note: the cluster must have 2 nodes.

Steps to Reproduce:

1. Create a PV like this:

```
# cat persistent-volume.ebs.kwoodson10g0001.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-ebs-kwoodson10g0001
  labels:
    type: ebs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  awsElasticBlockStore:
    volumeID: aws://us-east-1b/vol-33b97ad2
    fsType: ext4
```

2. Create a PVC:

```
# oc get pvc mysql -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-ebs-kwoodson10g0001
```

3. Create a pod that uses the above PVC.

4. Mark the node where the volume is currently mounted unschedulable, using this command on the openshift-master:

```
# KUBECONFIG=/etc/openshift/master/admin.kubeconfig oadm manage-node <nodename> --schedulable=false
```

5. Call `oc delete pods <pod name>`, e.g. `oc delete pods mysql-1-cmw43`. This should schedule the pod onto the other node.

6. Verify the error message "is already attached to an instance" by running this command:

```
# oc get events
Fri, 24 Jul 2015 14:40:21 -0400  Fri, 24 Jul 2015 14:40:31 -0400  2  mysql-1-25aej  Pod  failedMount  {kubelet 172.16.13.26}  Unable to mount volumes for pod "mysql-1-25aej_monitoring": Error attaching EBS volume: VolumeInUse: vol-33b97ad2 is already attached to an instance
```

Actual results:

The pod fails to deploy because the persistent volume is still attached to the previous node and cannot be mounted on the other node.

Expected results:

The pod should be deployed to the other node, with the drive unmounted and detached from the previous node and mounted on the new node.

Additional info:

It appears that the volume does not unmount correctly. Detaching the volume from the previous node may or may not be working. Test and verify that this is working.
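When triaging this, the failedMount event text itself carries the stuck volume ID. A minimal sketch (Python; the helper name is hypothetical, and the regex assumes the kubelet message format shown in the events output above) of pulling that ID out of an `oc get events` line so the volume's attachment state can be checked manually:

```python
import re

# Matches the kubelet message seen above, e.g.
# "Error attaching EBS volume: VolumeInUse: vol-33b97ad2 is already attached to an instance"
VOLUME_IN_USE = re.compile(r"VolumeInUse:\s*(vol-[0-9a-f]+)\s+is already attached")

def stuck_volume_id(event_line):
    """Return the EBS volume ID named in a VolumeInUse failedMount event, or None."""
    match = VOLUME_IN_USE.search(event_line)
    return match.group(1) if match else None
```

Feeding each line of `oc get events` through this helper yields the volume (here `vol-33b97ad2`) whose attachment to the previous node should be verified before the pod can be rescheduled.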