Description of problem:
A pod fails to mount an xfs disk while another pod is running with an ext4 volume, and the converse is also true.

Version-Release number of selected component (if applicable):
openshift v3.4.0.25+1f36858
kubernetes v1.4.0+776c994

How reproducible:
Always

Steps to Reproduce:
1. Create a pod mounting an xfs disk (see the sketch after this section).
2. Create another pod mounting an ext4 disk.
3. The ext4 pod fails to mount.
4. Delete all pods and create a new pod that uses the ext4 disk.

Actual results:
While the xfs pod is running, the error message looks like the following:

$ oc describe pod xfs
Events:
  FirstSeen  LastSeen  Count  From                                             SubobjectPath  Type     Reason       Message
  ---------  --------  -----  ----                                             -------------  ------   ------       -------
  7m         7m        1      {default-scheduler }                                            Normal   Scheduled    Successfully assigned azrarw to wehe-node-1.eastus.cloudapp.azure.com
  7m         59s       4      {kubelet wehe-node-1.eastus.cloudapp.azure.com}                 Warning  FailedMount  MountVolume.MountDevice failed for volume "kubernetes.io/azure-disk/test1.vhd" (spec.Name: "azure") pod "063fee27-abd8-11e6-91b6-000d3a15eb14" (UID: "063fee27-abd8-11e6-91b6-000d3a15eb14") with: mount failed: exit status 32
Mounting arguments: /dev/sdc /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/azure-disk/mounts/test1.vhd ext4 [defaults]
Output: mount: /dev/sdc is already mounted or /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/azure-disk/mounts/test1.vhd busy
        /dev/sdc is already mounted on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/azure-disk/mounts/xfs.vhd
        /dev/sdc is already mounted on /var/lib/origin/openshift.local.volumes/pods/4d4d8cb4-abd7-11e6-91b6-000d3a15eb14/volumes/kubernetes.io~azure-disk/azure

After deleting the xfs pod and its project, creating a new project and an ext4 pod produces the error below:

$ oc describe pod
Events:
  FirstSeen  LastSeen  Count  From                                             SubobjectPath  Type     Reason       Message
  ---------  --------  -----  ----                                             -------------  ------   ------       -------
  2m         2m        1      {default-scheduler }                                            Normal   Scheduled    Successfully assigned azrwro to wehe-node-1.eastus.cloudapp.azure.com
  1m         1m        1      {kubelet wehe-node-1.eastus.cloudapp.azure.com}                 Warning  FailedMount  MountVolume.MountDevice failed for volume "kubernetes.io/azure-disk/test1.vhd" (spec.Name: "azure") pod "5d43cc96-abd4-11e6-91b6-000d3a15eb14" (UID: "5d43cc96-abd4-11e6-91b6-000d3a15eb14") with: mount failed: exit status 32
Mounting arguments: /dev/sdc /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/azure-disk/mounts/test1.vhd ext4 [defaults]
Output: mount: wrong fs type, bad option, bad superblock on /dev/sdc, missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so.
  8s         8s        1      {kubelet wehe-node-1.eastus.cloudapp.azure.com}                 Warning  FailedMount  Unable to mount volumes for pod "azrwro_8e9wk(5d43cc96-abd4-11e6-91b6-000d3a15eb14)": timeout expired waiting for volumes to attach/mount for pod "azrwro"/"8e9wk". list of unattached/unmounted volumes=[azure]
  8s         8s        1      {kubelet wehe-node-1.eastus.cloudapp.azure.com}                 Warning  FailedSync   Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "azrwro"/"8e9wk". list of unattached/unmounted volumes=[azure]

Expected results:
xfs pods and ext4 pods can run together.

Additional info:
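For reference, a minimal sketch of the two pod definitions used in steps 1 and 2, using the in-tree azureDisk volume plugin. The pod names, image, mount path, and diskURI below are illustrative placeholders rather than the exact objects from this report; only the disk names test1.vhd and xfs.vhd come from the log output above.

apiVersion: v1
kind: Pod
metadata:
  name: xfs
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/rhel7   # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: azure
      mountPath: /mnt/azure
  volumes:
  - name: azure
    azureDisk:
      diskName: xfs.vhd
      diskURI: https://<storage-account>.blob.core.windows.net/vhds/xfs.vhd   # placeholder URI
      fsType: xfs
---
apiVersion: v1
kind: Pod
metadata:
  name: ext4
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/rhel7   # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: azure
      mountPath: /mnt/azure
  volumes:
  - name: azure
    azureDisk:
      diskName: test1.vhd
      diskURI: https://<storage-account>.blob.core.windows.net/vhds/test1.vhd   # placeholder URI
      fsType: ext4

Creating the second pod while the first is still running should reproduce the FailedMount events shown in the actual results.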
I figured out that your node VM is a D2, which can attach only a limited number of data disks (at most 4).
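To check the per-size data-disk limit yourself, one option is the current Azure CLI (this is a hedged example; the CLI version and output columns may differ from what was available when this bug was filed):

$ az vm list-sizes --location eastus --output table
# The MaxDataDiskCount column shows how many data disks each VM size (e.g. Standard_D2) can attach.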
*** This bug has been marked as a duplicate of bug 1408398 ***