Bug 1337103 - AWS EBS volume remains in "in-use" status if re-mount fails with a different fstype
Keywords:
Status: CLOSED DUPLICATE of bug 1327384
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Assignee: Bradley Childs
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-18 10:04 UTC by Chao Yang
Modified: 2016-05-24 03:39 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-24 03:39:06 UTC
Target Upstream Version:
Embargoed:



Description Chao Yang 2016-05-18 10:04:30 UTC
Description of problem:
The AWS EBS volume remains in the "in-use" state if a re-mount fails with a different fstype.

Version-Release number of selected component (if applicable):
openshift v3.2.0.44
kubernetes v1.2.0-36-g4a3f9c5
etcd 2.2.5


How reproducible:
Always

Steps to Reproduce:
1. Create an EBS volume from the web console and record the volume ID.
2. Create a pod that uses this volume with fsType ext4:
apiVersion: v1
kind: Pod
metadata:
  name: aws-web
spec:
  containers:
    - name: web
      image: jhou/hello-openshift
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      volumeMounts:
        - name: html-volume
          mountPath: "/usr/share/nginx/html"
  volumes:
    - name: html-volume
      awsElasticBlockStore:
        volumeID: aws://us-east-1d/vol-e931934c
        fsType: ext4
        readOnly: false

3. After the pod is running, delete the pod.
4. Create the pod again with the same volume, but with fsType ext3.
5. The pod fails to start:
Mount failed: exit status 32
6. Delete the pod.
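
The pod spec used in step 4 is not quoted in the report; a minimal sketch of the assumed variant, identical to the step 2 spec except for the fsType field (the volume ID is reused from above):

```yaml
# Hypothetical step 4 pod spec (not from the original report):
# same as step 2, with fsType changed from ext4 to ext3.
apiVersion: v1
kind: Pod
metadata:
  name: aws-web
spec:
  containers:
    - name: web
      image: jhou/hello-openshift
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      volumeMounts:
        - name: html-volume
          mountPath: "/usr/share/nginx/html"
  volumes:
    - name: html-volume
      awsElasticBlockStore:
        volumeID: aws://us-east-1d/vol-e931934c
        fsType: ext3    # the only change; the volume was already formatted ext4 in step 2
        readOnly: false
```

Since the volume already carries an ext4 filesystem from step 2, the kubelet's mount with `-t ext3` fails (see the kernel log in Additional info), and the failed mount is what leaves the attachment behind.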

Actual results:
The volume still shows as "in-use" in the web console.

Expected results:
The volume should return to "available" in the web console.

Additional info:
May 18 04:22:44 ip-172-18-6-36 atomic-openshift-node: I0518 04:22:44.711674    6198 aws_util.go:207] Successfully attached EBS Disk "aws://us-east-1d/vol-e931934c".
May 18 04:22:44 ip-172-18-6-36 atomic-openshift-node: I0518 04:22:44.756832    6198 mount_linux.go:105] Mounting /dev/xvdbb /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1d/vol-e931934c ext3 [defaults]
May 18 04:22:44 ip-172-18-6-36 kernel: xvdbb: unknown partition table
May 18 04:22:44 ip-172-18-6-36 kernel: EXT4-fs (xvdbb): couldn't mount as ext3 due to feature incompatibilities
May 18 04:22:44 ip-172-18-6-36 atomic-openshift-node: I0518 04:22:44.774455    6198 keymutex.go:58] UnlockKey(...) called for id "aws://us-east-1d/vol-e931934c"
May 18 04:22:44 ip-172-18-6-36 atomic-openshift-node: I0518 04:22:44.774478    6198 keymutex.go:65] UnlockKey(...) for id. Mutex found, trying to unlock it. "aws://us-east-1d/vol-e931934c"
May 18 04:22:44 ip-172-18-6-36 atomic-openshift-node: I0518 04:22:44.774486    6198 keymutex.go:68] UnlockKey(...) for id "aws://us-east-1d/vol-e931934c" completed.
May 18 04:22:44 ip-172-18-6-36 atomic-openshift-node: E0518 04:22:44.774544    6198 kubelet.go:1796] Unable to mount volumes for pod "aws-web_default(614b7b30-1cd1-11e6-8d4a-0e136dbc9083)": Mount failed: exit status 32
May 18 04:22:44 ip-172-18-6-36 atomic-openshift-node: Mounting arguments: /dev/xvdbb /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1d/vol-e931934c ext3 [defaults]
May 18 04:22:44 ip-172-18-6-36 atomic-openshift-node: Output: mount: wrong fs type, bad option, bad superblock on /dev/xvdbb,

Comment 1 Bradley Childs 2016-05-24 03:39:06 UTC

*** This bug has been marked as a duplicate of bug 1327384 ***

