Bug 1626683 - Unable to Mount StatefulSet PV in AWS EBS
Summary: Unable to Mount StatefulSet PV in AWS EBS
Keywords:
Status: CLOSED DUPLICATE of bug 1632440
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.9.z
Assignee: Hemant Kumar
QA Contact: Liang Xia
URL:
Whiteboard:
Duplicates: 1656439
Depends On:
Blocks: 1632440
 
Reported: 2018-09-07 21:15 UTC by Greg Rodriguez II
Modified: 2020-05-20 19:52 UTC
CC: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned To: 1632440
Environment:
Last Closed: 2019-04-23 11:40:23 UTC
Target Upstream Version:
Embargoed:



Description Greg Rodriguez II 2018-09-07 21:15:52 UTC
Description of problem:
A customer using AWS EBS is unable to mount StatefulSet PVs in OCP. The error reports "Timeout waiting for mount paths to be created." The customer states they can log into the AWS console and see that the EBS volume is provisioned and attached to the instance.
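
For reference, the volume ID that the kubelet tries to match during MountVolume.WaitForAttach is recorded on the dynamically provisioned PV object. The customer's actual PV was not captured in this report; the following is a minimal sketch of what it would roughly look like, using the volume ID from the node logs below:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-09ef0d26-abe6-11e8-b624-06cfff236190
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: gp2
  awsElasticBlockStore:
    # The in-tree AWS EBS volume source; the volumeID below is the
    # identifier the node logs report it cannot find attached.
    fsType: ext4
    volumeID: aws://us-west-1c/vol-0c8071a430837f6d4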

Version-Release number of selected component (if applicable):
OCP v3.9.31

How reproducible:
Verified as reproducible by the customer.

Steps to Reproduce:
Create the following Service and StatefulSet (customer-provided manifests; see the note following them):
apiVersion: v1
kind: Service
metadata:
  name: redis-primary06
  labels:
    app: redis-primary06
spec:
  ports:
  - port: 6379
    name: redis-primary06
  selector:
    app: redis-primary06
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis-primary06
spec:
  serviceName: redis-primary06
  replicas: 1
  template:
    metadata:
      labels:
        app: redis-primary06
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: redis-primary06
        image: docker-registry.default.svc:5000/stateful/redis-stateful01:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 6379
          name: redis-primary06
        volumeMounts:
        - name: redis-primary06-volume
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: redis-primary06-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
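
Once these objects are created (for example, with oc create -f <manifest>.yaml), the volumeClaimTemplate generates the PVC and the EBS volume is provisioned and attached, but the pod remains stuck waiting for the volume to mount, producing the node-log errors shown under Actual results.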

Actual results:
Sep 04 10:32:51 ip-10-244-153-212.us-west-1.compute.internal atomic-openshift-node[3449]: E0904 10:32:51.723461    3449 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/aws-ebs/aws://us-west-1c/vol-0c8071a430837f6d4\"" failed. No retries permitted until 2018-09-04 10:32:52.223433975 -0700 PDT m=+695195.414573422 (durationBeforeRetry 500ms). Error: "MountVolume.WaitForAttach failed for volume \"pvc-09ef0d26-abe6-11e8-b624-06cfff236190\" (UniqueName: \"kubernetes.io/aws-ebs/aws://us-west-1c/vol-0c8071a430837f6d4\") pod \"redis-primary-0\" (UID: \"de0a3822-b066-11e8-8975-06c7af66a254\") : Could not find attached AWS Volume \"aws://us-west-1c/vol-0c8071a430837f6d4\". Timeout waiting for mount paths to be created."
Sep 04 10:32:52 ip-10-244-153-212.us-west-1.compute.internal atomic-openshift-node[3449]: I0904 10:32:52.273439    3449 reconciler.go:262] operationExecutor.MountVolume started for volume "pvc-09ef0d26-abe6-11e8-b624-06cfff236190" (UniqueName: "kubernetes.io/aws-ebs/aws://us-west-1c/vol-0c8071a430837f6d4") pod "redis-primary-0" (UID: "de0a3822-b066-11e8-8975-06c7af66a254")
Sep 04 10:32:52 ip-10-244-153-212.us-west-1.compute.internal atomic-openshift-node[3449]: I0904 10:32:52.273526    3449 operation_generator.go:481] MountVolume.WaitForAttach entering for volume "pvc-09ef0d26-abe6-11e8-b624-06cfff236190" (UniqueName: "kubernetes.io/aws-ebs/aws://us-west-1c/vol-0c8071a430837f6d4") pod "redis-primary-0" (UID: "de0a3822-b066-11e8-8975-06c7af66a254") DevicePath "/dev/xvdcb"

Expected results:
The volume mounts successfully.

Master Log:
- sosreport from masters attached

Node Log (of failed PODs):
- Attached

StorageClass Dump (if StorageClass used by PV/PVC):
Default StorageClass (AWS EBS):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: 2018-07-17T17:28:22Z
  name: gp2
  resourceVersion: "6970653"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/gp2
  uid: ce347ec3-89e6-11e8-b3e6-06c7af66a254
parameters:
  encrypted: "false"
  kmsKeyId: ""
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete

Additional info:
N/A

Comment 4 Greg Rodriguez II 2018-09-07 21:54:31 UTC
Clarification from Customer: 

The persistent volume mount issue also occurs with a standalone persistent volume claim, not just with a StatefulSet PV.
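
For illustration, a standalone reproduction along those lines would pair a bare PVC (bound by the default gp2 StorageClass) with a pod that mounts it. The customer's exact objects were not provided, so the names below are illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-standalone-volume
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: redis-standalone
spec:
  containers:
  - name: redis
    image: docker-registry.default.svc:5000/stateful/redis-stateful01:latest
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: redis-standalone-volume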

Comment 6 Greg Rodriguez II 2018-09-10 16:54:31 UTC
The customer has requested an increase to the severity and urgency of this issue. Are there any updates I can provide to the customer at this time?

Comment 18 Greg Rodriguez II 2018-09-12 23:09:02 UTC
Hemant, the customer has verified that the issue is reproducible and has also accepted the option to screen share to troubleshoot further.

The customer also wanted to know if there was a confirmed root cause at this time.

When might you be available to work with the customer directly? I will let them know and report back.

Comment 27 Greg Rodriguez II 2018-09-18 19:39:09 UTC
The customer is requesting the current status of this issue. Please provide any update you can. Thank you!

Comment 35 Hemant Kumar 2019-01-04 17:46:23 UTC
*** Bug 1656439 has been marked as a duplicate of this bug. ***

Comment 36 Stephen Cuppett 2019-04-23 11:40:23 UTC
Marking as a duplicate of BZ 1632440. Can someone verify this is resolved with a kernel containing the fix?

*** This bug has been marked as a duplicate of bug 1632440 ***

