Bug 1659653 - WaitForAttach failed for VMDK devicePath is empty
Summary: WaitForAttach failed for VMDK devicePath is empty
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.11.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.11.z
Assignee: Hemant Kumar
QA Contact: Chao Yang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-14 22:15 UTC by Tom Manor
Modified: 2023-03-24 14:26 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-20 14:11:02 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3851721 0 None None None 2019-01-28 15:10:09 UTC
Red Hat Product Errata RHBA-2019:0326 0 None None None 2019-02-20 14:11:09 UTC

Description Tom Manor 2018-12-14 22:15:17 UTC
Description of problem:

Seeing an issue with a StatefulSet where the pod throws an error saying devicePath is empty. This does not impact all StatefulSets, but it occurs whenever one is redeployed.


Version-Release number of selected component (if applicable):

OCP v3.11.43


How reproducible:

The issue is reproducible whenever the StatefulSet is redeployed.


Steps to Reproduce:
1. Start with a running 3.11.43 cluster
2. Create a StatefulSet with an attached PVC bound to a dynamically provisioned volume
3. Delete the StatefulSet
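
For step 2, the PVC must bind to a dynamically provisioned vSphere volume. A minimal StorageClass sketch for that setup (the class name here is hypothetical; `kubernetes.io/vsphere-volume` is the in-tree provisioner used on vSphere in OCP 3.11):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-standard    # hypothetical name; use the cluster's default class if one exists
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin          # thin-provisioned VMDKs, created under the datastore's kubevols folder
```

A volumeClaimTemplates entry (as in the manifest in comment 9) referencing this class via storageClassName will then provision one VMDK per replica.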


Actual results:

As the StatefulSet is recreated:
1. The VMDK is attached to the VM.
2. The lsblk command shows the disk attached on the VM.
3. The pod reports the error shown below.
--------------------------------------------------
  Type     Reason       Age                From                      Message
  ----     ------       ----               ----                      -------
  Warning  FailedMount  22m (x51 over 2h)  kubelet, customerqa008  Unable to mount volumes for pod "query-master-0_app-qa(7d30a1db-ffa0-11e8-a650-005056baf2f2)": timeout expired waiting for volumes to attach or mount for pod "app-qa"/"search-master-0". list of unmounted volumes=[query-raw-data]. list of unattached volumes=[query-raw-data log-data splunk-config default-datasource]
  Warning  FailedMount  11m (x70 over 2h)  kubelet, suocpedcmpqa008  MountVolume.WaitForAttach failed for volume "pvc-cc116f5f-fee9-11e8-8682-005056baf2f2" : WaitForAttach failed for VMDK "[CUST_VPLEX0146_SG01_D700_local] kubevols/kubernetes-dynamic-pvc-cc116f5f-fee9-11e8-8682-005056baf2f2.vmdk": devicePath is empty.
----------------------------------------------------------


Expected results:

Pod recreates correctly with PVC mounted.

Additional info:

Comment 1 Hemant Kumar 2018-12-15 04:16:10 UTC
Moving to storage.

Comment 9 Chao Yang 2019-01-30 06:31:25 UTC
Verified as passed on:
Server https://ip-172-18-13-58.ec2.internal:8443
openshift v3.11.75
kubernetes v1.11.0+d4cacc0

1. Create a StatefulSet app:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: aosqe/hello-openshift
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
2. After all pods are running, delete the StatefulSet
3. Recreate the StatefulSet
4. All pods are running again; no error is seen

Comment 15 errata-xmlrpc 2019-02-20 14:11:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0326

