Bug 1395605 - Azure disk: Timeout expired waiting for volume to attach/mount for pod when using a detaching disk
Summary: Azure disk: Timeout expired waiting for volume to attach/mount for pod when using a detaching disk
Keywords:
Status: CLOSED DUPLICATE of bug 1408398
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.4.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: hchen
QA Contact: Wenqi He
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-16 09:48 UTC by Wenqi He
Modified: 2017-02-01 15:48 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-01 15:48:24 UTC
Target Upstream Version:



Description Wenqi He 2016-11-16 09:48:53 UTC
Description of problem:
A pod fails to mount its xfs disk while another pod is running with an ext4 disk volume, and the converse is also true.

Version-Release number of selected component (if applicable):
openshift v3.4.0.25+1f36858
kubernetes v1.4.0+776c994

How reproducible:
Always

Steps to Reproduce:
1. Create a pod mounting an xfs-formatted Azure disk (a sample pod spec is sketched below).
2. Create another pod mounting an ext4-formatted Azure disk.
3. The ext4 pod fails to mount its volume.
4. Delete all pods and create a new pod that uses the ext4 disk.
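
For reference, a minimal sketch of the kind of pod spec used in step 1, assuming an inline azureDisk volume; the image, disk name, and disk URI are placeholders, not the exact objects from this report. The second pod would be the same except for its own disk and fsType: ext4.

$ cat <<EOF | oc create -f -
apiVersion: v1
kind: Pod
metadata:
  name: xfs
spec:
  containers:
  - name: app
    # placeholder image; any long-running container works
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: azure
      mountPath: /mnt/azure
  volumes:
  - name: azure
    azureDisk:
      # placeholder disk name and URI
      diskName: test1.vhd
      diskURI: https://example.blob.core.windows.net/vhds/test1.vhd
      fsType: xfs
EOF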

Actual results:
While the xfs pod is running, the error message looks like the one below:
$ oc describe pod xfs
Events:
  FirstSeen	LastSeen	Count	From						SubobjectPath	Type		Reason		Message
  ---------	--------	-----	----						-------------	--------	------		-------
  7m		7m		1	{default-scheduler }						Normal		Scheduled	Successfully assigned azrarw to wehe-node-1.eastus.cloudapp.azure.com
  7m		59s		4	{kubelet wehe-node-1.eastus.cloudapp.azure.com}			Warning		FailedMount	MountVolume.MountDevice failed for volume "kubernetes.io/azure-disk/test1.vhd" (spec.Name: "azure") pod "063fee27-abd8-11e6-91b6-000d3a15eb14" (UID: "063fee27-abd8-11e6-91b6-000d3a15eb14") with: mount failed: exit status 32
Mounting arguments: /dev/sdc /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/azure-disk/mounts/test1.vhd ext4 [defaults]
Output: mount: /dev/sdc is already mounted or /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/azure-disk/mounts/test1.vhd busy
       /dev/sdc is already mounted on /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/azure-disk/mounts/xfs.vhd
       /dev/sdc is already mounted on /var/lib/origin/openshift.local.volumes/pods/4d4d8cb4-abd7-11e6-91b6-000d3a15eb14/volumes/kubernetes.io~azure-disk/azure


After deleting the xfs pod and project, then creating a new project and an ext4 pod, the error below appears:
$ oc describe pod

Events:
  FirstSeen	LastSeen	Count	From						SubobjectPath	Type		Reason		Message
  ---------	--------	-----	----						-------------	--------	------		-------
  2m		2m		1	{default-scheduler }						Normal		Scheduled	Successfully assigned azrwro to wehe-node-1.eastus.cloudapp.azure.com
  1m		1m		1	{kubelet wehe-node-1.eastus.cloudapp.azure.com}			Warning		FailedMount	MountVolume.MountDevice failed for volume "kubernetes.io/azure-disk/test1.vhd" (spec.Name: "azure") pod "5d43cc96-abd4-11e6-91b6-000d3a15eb14" (UID: "5d43cc96-abd4-11e6-91b6-000d3a15eb14") with: mount failed: exit status 32
Mounting arguments: /dev/sdc /var/lib/origin/openshift.local.volumes/plugins/kubernetes.io/azure-disk/mounts/test1.vhd ext4 [defaults]
Output: mount: wrong fs type, bad option, bad superblock on /dev/sdc,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.


  8s	8s	1	{kubelet wehe-node-1.eastus.cloudapp.azure.com}		Warning	FailedMount	Unable to mount volumes for pod "azrwro_8e9wk(5d43cc96-abd4-11e6-91b6-000d3a15eb14)": timeout expired waiting for volumes to attach/mount for pod "azrwro"/"8e9wk". list of unattached/unmounted volumes=[azure]
  8s	8s	1	{kubelet wehe-node-1.eastus.cloudapp.azure.com}		Warning	FailedSync	Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "azrwro"/"8e9wk". list of unattached/unmounted volumes=[azure]
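
The events above already point at /dev/sdc; a quick way to confirm what is actually on the device at that moment (a sketch using standard tools on the node, not output captured from this environment):

$ lsblk -f /dev/sdc   # shows the filesystem currently on the disk (xfs vs ext4)
$ findmnt /dev/sdc    # lists any mount points still holding the device
$ dmesg | tail        # kernel messages for the failed ext4 mount, as the error suggests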


Expected results:
xfs pods and ext4 pods can run together.

Additional info:

Comment 19 hchen 2016-11-23 02:13:05 UTC
I figured out that your node VM is a D2. You can attach at most 4 data disks to it.
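
(For context, the number of disks already attached can be checked directly on the node; this is a generic sketch, not output from this environment. On Azure VMs /dev/sda is normally the OS disk and /dev/sdb the temporary resource disk, so data disks start at /dev/sdc.)

$ lsblk -d -o NAME,SIZE,TYPE | grep '^sd'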

Comment 22 hchen 2017-02-01 15:48:24 UTC

*** This bug has been marked as a duplicate of bug 1408398 ***

