Bug 1776773 - [vSphere][4.3] Volume cannot mount to node after upgrade and in the from-scratch case
Summary: [vSphere][4.3] Volume cannot mount to node after upgrade and in the from-scratch case
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Machine Config Operator
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.3.0
Assignee: Erica von Buelow
QA Contact: Michael Nguyen
URL:
Whiteboard:
Duplicates: 1775685 1777195
Depends On: 1777082
Blocks:
 
Reported: 2019-11-26 11:01 UTC by Wei Duan
Modified: 2020-01-23 11:14 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1777082 (view as bug list)
Environment:
Last Closed: 2020-01-23 11:14:34 UTC
Target Upstream Version:
Embargoed:


Attachments
Vsphere snapshot (126.05 KB, image/png)
2019-11-26 11:01 UTC, Wei Duan


Links:
  GitHub openshift/machine-config-operator pull 1293 (closed) - Bug 1776773: set vsphere as provider - last updated 2021-01-31 08:57:01 UTC
  Red Hat Product Errata RHBA-2020:0062 - last updated 2020-01-23 11:14:48 UTC

Description Wei Duan 2019-11-26 11:01:49 UTC
Created attachment 1639770 [details]
Vsphere snapshot

Description of problem:
After upgrading from 4.2 to a 4.3 nightly build, the PV can no longer be mounted on the node. The kubelet reports that the volume does not exist, even though it can be found in the vSphere portal.
Newly created pods with a PVC/PV do not work either.
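
A quick way to spot other affected workloads is to look for Pending pods and FailedMount events across the cluster (rough triage sketch; any namespace with vSphere-backed PVCs can be affected):

$ oc get pods --all-namespaces --field-selector=status.phase=Pending
$ oc get events --all-namespaces --field-selector=reason=FailedMount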

Version-Release number of selected component (if applicable):
upgrade from 4.2 to 4.3.0-0.nightly-2019-11-25-153929

How reproducible:

Sometimes during upgrade

Steps to Reproduce:
1. A pod with a PVC/PV is created and works fine before the upgrade.

2. Upgrade from 4.2 to a 4.3 nightly build.

3. The pre-existing pod with the PVC/PV no longer works:

$ oc describe pod elasticsearch-cdm-1fnn2uw8-3-6669dc877d-zfdfk
Name:               elasticsearch-cdm-1fnn2uw8-3-6669dc877d-zfdfk
Namespace:          openshift-logging
Priority:           0
PriorityClassName:  <none>
Node:               compute-3/139.178.76.25
Start Time:         Tue, 26 Nov 2019 16:04:23 +0800
Labels:             cluster-name=elasticsearch
                    component=elasticsearch
                    es-node-client=true
                    es-node-data=true
                    es-node-master=true
                    node-name=elasticsearch-cdm-1fnn2uw8-3
                    pod-template-hash=6669dc877d
                    tuned.openshift.io/elasticsearch=true
Annotations:        openshift.io/scc: restricted
Status:             Pending
IP:                 
Controlled By:      ReplicaSet/elasticsearch-cdm-1fnn2uw8-3-6669dc877d
Containers:
  elasticsearch:
    Container ID:   
    Image:          image-registry.openshift-image-registry.svc:5000/openshift/ose-logging-elasticsearch5@sha256:0029f94d663d374a4f86c318bc1fb483a0958e22ae09f7723b2c20aab5d6c41c
    Image ID:       
    Ports:          9300/TCP, 9200/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  2Gi
    Requests:
      cpu:      200m
      memory:   2Gi
    Readiness:  exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3
    Environment:
      DC_NAME:                  elasticsearch-cdm-1fnn2uw8-3
      NAMESPACE:                openshift-logging (v1:metadata.namespace)
      KUBERNETES_TRUST_CERT:    true
      SERVICE_DNS:              elasticsearch-cluster
      CLUSTER_NAME:             elasticsearch
      INSTANCE_RAM:             2Gi
      HEAP_DUMP_LOCATION:       /elasticsearch/persistent/heapdump.hprof
      RECOVER_AFTER_TIME:       5m
      READINESS_PROBE_TIMEOUT:  30
      POD_LABEL:                cluster=elasticsearch
      IS_MASTER:                true
      HAS_DATA:                 true
    Mounts:
      /elasticsearch/persistent from elasticsearch-storage (rw)
      /etc/openshift/elasticsearch/secret from certificates (rw)
      /usr/share/java/elasticsearch/config from elasticsearch-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from elasticsearch-token-mkwf9 (ro)
  proxy:
    Container ID:  
    Image:         image-registry.openshift-image-registry.svc:5000/openshift/ose-oauth-proxy@sha256:741ad9d77dd96da36a1b1df51747eff477eb33889391aacd8ca82445022afcc4
    Image ID:      
    Port:          60000/TCP
    Host Port:     0/TCP
    Args:
      --https-address=:60000
      --provider=openshift
      --upstream=https://127.0.0.1:9200
      --tls-cert=/etc/proxy/secrets/tls.crt
      --tls-key=/etc/proxy/secrets/tls.key
      --upstream-ca=/etc/proxy/elasticsearch/admin-ca
      --openshift-service-account=elasticsearch
      -openshift-sar={"resource": "namespaces", "verb": "get"}
      -openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}
      --pass-user-bearer-token
      --cookie-secret=buqCSQ4QJymiNNmSz1/Yug==
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  64Mi
    Requests:
      cpu:        100m
      memory:     64Mi
    Environment:  <none>
    Mounts:
      /etc/proxy/elasticsearch from certificates (rw)
      /etc/proxy/secrets from elasticsearch-metrics (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from elasticsearch-token-mkwf9 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  elasticsearch-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      elasticsearch
    Optional:  false
  elasticsearch-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  elasticsearch-elasticsearch-cdm-1fnn2uw8-3
    ReadOnly:   false
  certificates:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  elasticsearch
    Optional:    false
  elasticsearch-metrics:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  elasticsearch-metrics
    Optional:    false
  elasticsearch-token-mkwf9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  elasticsearch-token-mkwf9
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From                Message
  ----     ------            ----       ----                -------
  Warning  FailedScheduling  <unknown>  default-scheduler   0/7 nodes are available: 1 Insufficient cpu, 2 node(s) were unschedulable, 5 Insufficient memory.
  Warning  FailedScheduling  <unknown>  default-scheduler   0/7 nodes are available: 1 Insufficient cpu, 2 node(s) were unschedulable, 5 Insufficient memory.
  Warning  FailedScheduling  <unknown>  default-scheduler   0/7 nodes are available: 1 Insufficient cpu, 1 node(s) had taints that the pod didn't tolerate, 1 node(s) were unschedulable, 5 Insufficient memory.
  Warning  FailedScheduling  <unknown>  default-scheduler   0/7 nodes are available: 1 Insufficient cpu, 2 node(s) had taints that the pod didn't tolerate, 5 Insufficient memory.
  Normal   Scheduled         <unknown>  default-scheduler   Successfully assigned openshift-logging/elasticsearch-cdm-1fnn2uw8-3-6669dc877d-zfdfk to compute-3
  Warning  FailedMount       121m       kubelet, compute-3  MountVolume.SetUp failed for volume "pvc-1aec1a38-101c-11ea-91dc-0050568b94af" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af
Output: Running scope as unit: run-r79fa1f4acbdb4cc493872caf3a1673c4.scope
mount: /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk does not exist.
  Normal   SuccessfulAttachVolume  121m  attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-1aec1a38-101c-11ea-91dc-0050568b94af"
  Warning  FailedMount             121m  kubelet, compute-3       MountVolume.SetUp failed for volume "pvc-1aec1a38-101c-11ea-91dc-0050568b94af" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af
Output: Running scope as unit: run-r1a2a31af30874d3ea2205a9bba48d391.scope
mount: /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk does not exist.
  Warning  FailedMount  121m  kubelet, compute-3  MountVolume.SetUp failed for volume "pvc-1aec1a38-101c-11ea-91dc-0050568b94af" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af
Output: Running scope as unit: run-r5b09f2fc63f64c3c9cf3be67ae31ead0.scope
mount: /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk does not exist.
  Warning  FailedMount  121m  kubelet, compute-3  MountVolume.SetUp failed for volume "pvc-1aec1a38-101c-11ea-91dc-0050568b94af" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af
Output: Running scope as unit: run-r16b7800cbcb040938e1e71a25536ccf4.scope
mount: /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk does not exist.
  Warning  FailedMount  121m  kubelet, compute-3  MountVolume.SetUp failed for volume "pvc-1aec1a38-101c-11ea-91dc-0050568b94af" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af
Output: Running scope as unit: run-rc5d0aceaf4484676a1adfab75e5f7ab5.scope
mount: /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk does not exist.
  Warning  FailedMount  121m  kubelet, compute-3  MountVolume.SetUp failed for volume "pvc-1aec1a38-101c-11ea-91dc-0050568b94af" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af
Output: Running scope as unit: run-r1708b677305f44939f06ad327e6f89a9.scope
mount: /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk does not exist.
  Warning  FailedMount  120m  kubelet, compute-3  MountVolume.SetUp failed for volume "pvc-1aec1a38-101c-11ea-91dc-0050568b94af" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af
Output: Running scope as unit: run-r99ecdf02405643e2b3c1b5bd7746db01.scope
mount: /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk does not exist.
  Warning  FailedMount  120m  kubelet, compute-3  MountVolume.SetUp failed for volume "pvc-1aec1a38-101c-11ea-91dc-0050568b94af" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af
Output: Running scope as unit: run-rfc5058769d7d43709106f5e1aa665a73.scope
mount: /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk does not exist.
  Warning  FailedMount  101m                 kubelet, compute-3  Unable to attach or mount volumes: unmounted volumes=[elasticsearch-storage], unattached volumes=[elasticsearch-token-mkwf9 elasticsearch-metrics elasticsearch-storage elasticsearch-config certificates]: timed out waiting for the condition
  Warning  FailedMount  76m (x3 over 119m)   kubelet, compute-3  Unable to attach or mount volumes: unmounted volumes=[elasticsearch-storage], unattached volumes=[elasticsearch-config certificates elasticsearch-token-mkwf9 elasticsearch-metrics elasticsearch-storage]: timed out waiting for the condition
  Warning  FailedMount  30m (x3 over 55m)    kubelet, compute-3  Unable to attach or mount volumes: unmounted volumes=[elasticsearch-storage], unattached volumes=[elasticsearch-storage elasticsearch-config certificates elasticsearch-token-mkwf9 elasticsearch-metrics]: timed out waiting for the condition
  Warning  FailedMount  68s (x87 over 119m)  kubelet, compute-3  (combined from similar events): MountVolume.SetUp failed for volume "pvc-1aec1a38-101c-11ea-91dc-0050568b94af" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af
Output: Running scope as unit: run-r2b3d6075f1b241e0bd87b236011df39e.scope
mount: /var/lib/kubelet/pods/1bc67be5-5c6f-44e7-ae96-73bc93fc198c/volumes/kubernetes.io~vsphere-volume/pvc-1aec1a38-101c-11ea-91dc-0050568b94af: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk does not exist.
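
The "special device ... does not exist" path is the global mount point that the kubelet's vSphere volume plugin is expected to create under /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/ before bind-mounting it into the pod directory; the repeated failures suggest that global mount was never created. A rough check on the node (path copied from the events above):

sh-4.4# ls -ld "/var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk"
sh-4.4# mount | grep vsphere-volume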

Actual results:
The pod does not work after the upgrade; it stays in ContainerCreating because the PV cannot be mounted.

Expected results:
The pod should continue to work after the upgrade.


Master Log:

Node Log (of failed PODs):

PV Dump:
$ oc describe pvc elasticsearch-elasticsearch-cdm-1fnn2uw8-1
Name:          elasticsearch-elasticsearch-cdm-1fnn2uw8-1
Namespace:     openshift-logging
StorageClass:  thin
Status:        Bound
Volume:        pvc-1ae9daea-101c-11ea-91dc-0050568b94af
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      9537Mi
Access Modes:  RWO
VolumeMode:    Filesystem
Events:
  Type       Reason                 Age    From                         Message
  ----       ------                 ----   ----                         -------
  Normal     ProvisioningSucceeded  3h35m  persistentvolume-controller  Successfully provisioned volume pvc-1ae9daea-101c-11ea-91dc-0050568b94af using kubernetes.io/vsphere-volume
Mounted By:  elasticsearch-cdm-1fnn2uw8-1-65957d5b44-m6tr2
[wduan@dhcp-140-40 01_general]$ oc describe pv pvc-1ae9daea-101c-11ea-91dc-0050568b94af
Name:            pvc-1ae9daea-101c-11ea-91dc-0050568b94af
Labels:          <none>
Annotations:     kubernetes.io/createdby: vsphere-volume-dynamic-provisioner
                 pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: kubernetes.io/vsphere-volume
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    thin
Status:          Bound
Claim:           openshift-logging/elasticsearch-elasticsearch-cdm-1fnn2uw8-1
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        9537Mi
Node Affinity:   <none>
Message:         
Source:
    Type:               vSphereVolume (a Persistent Disk resource in vSphere)
    VolumePath:         [nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1ae9daea-101c-11ea-91dc-0050568b94af.vmdk
    FSType:             ext4
    StoragePolicyName:  
Events:                 <none>
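
The VolumePath above is the vmdk path on the nvme-ds1 datastore. If govc is configured against this vCenter, the file's existence can be confirmed directly (rough sketch; the datastore and file name are taken from the PV above, and govc connection environment variables are assumed to be set):

$ govc datastore.ls -ds=nvme-ds1 kubevols/ | grep pvc-1ae9daea-101c-11ea-91dc-0050568b94af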


PVC Dump:


StorageClass Dump (if StorageClass used by PV/PVC):
$ oc describe sc thin 
Name:                  thin
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/vsphere-volume
Parameters:            diskformat=thin
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
[wduan@dhcp-140-40 01_general]$ oc get sc thin -oyaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2019-11-26T06:24:13Z"
  name: thin
  ownerReferences:
  - apiVersion: v1
    kind: clusteroperator
    name: storage
    uid: c49c0a62-1013-11ea-b211-0050568ba4b9
  resourceVersion: "11461"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/thin
  uid: 5d6041e6-1015-11ea-ba6e-0050568b89a7
parameters:
  diskformat: thin
provisioner: kubernetes.io/vsphere-volume
reclaimPolicy: Delete
volumeBindingMode: Immediate


Additional info:
1. The volume exists in vSphere; see the attached snapshot.
2. Checked on the worker node; it is not clear whether something is missing from the configuration below.
sh-4.4# ps -eaf | grep kubelet
root        1586       1  3 07:49 ?        00:06:50 /usr/bin/hyperkube kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --node-labels=node-role.kubernetes.io/worker,node.openshift.io/os_id=rhcos --minimum-container-ttl-duration=6m0s --volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --cloud-provider= --v=3
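
Note that --cloud-provider= is empty in the kubelet arguments above. With the in-tree vSphere volume plugin, the kubelet is expected to be started with the vSphere cloud provider enabled, roughly along these lines (illustrative only; the exact flag set, and in particular the cloud-config path, may differ between releases):

/usr/bin/hyperkube kubelet ... --cloud-provider=vsphere --cloud-config=/etc/kubernetes/cloud.conf ...

A quick check on any node:
sh-4.4# ps -eaf | grep kubelet | grep -o 'cloud-provider=[^ ]*'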

Comment 2 Liang Xia 2019-11-26 13:00:14 UTC
Some more info about one of the PVs/volumes.

$ oc get pv pvc-91bd1cbc-9e92-41d0-ae14-960084f3d902
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
pvc-91bd1cbc-9e92-41d0-ae14-960084f3d902   1Gi        RWO            Delete           Bound    wduan/sc-resourcegroup-04   thin                    166m


$ oc get pv pvc-91bd1cbc-9e92-41d0-ae14-960084f3d902 -o yaml | grep volumePath
    volumePath: '[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-91bd1cbc-9e92-41d0-ae14-960084f3d902.vmdk'


In vSphere events,
 Reconfigured compute-3 on vsphere-qe.vmware.devcluster.openshift.com in dc1. Modified: config.hardware.device(1000).device: (2000, 2002, 2001) -> (2000, 2002, 2001, 2003); Added: config.hardware.device(2003): (key = 2003, deviceInfo = (label = "Hard disk 4", summary = "1,048,576 KB"), backing = (fileName = "ds:///vmfs/volumes/5c9ce559-d9430ec0-e8d5-506b4bb49f6a/kubevols/qe-minmli-428-xzwsj-dynamic-pvc-91bd1cbc-9e92-41d0-ae14-960084f3d902.vmdk", datastore = 'vim.Datastore:c95eb2db-783e-4b6a-b867-01da64d6716e:datastore-266', backingObjectId = "", diskMode = "independent_persistent", split = false, writeThrough = false, thinProvisioned = true, eagerlyScrub = <unset>, uuid = "6000C299-4015-7f24-5b5a-7a735746b2d5", contentId = "6fee0501c61825b198c93044fffffffe", changeId = <unset>, parent = null, deltaDiskFormat = <unset>, digestEnabled = false, deltaGrainSize = <unset>, deltaDiskFormatVariant = <unset>, sharing = "sharingNone", keyId = null), connectable = null, slotInfo = null, controllerKey = 1000, unitNumber = 3, capacityInKB = 1048576, capacityInBytes = 1073741824, shares = (shares = 1000, level = "normal"), storageIOAllocation = (limit = -1, shares = (shares = 1000, level = "normal"), reservation = 0), diskObjectId = "3889-2003", vFlashCacheConfigInfo = null, iofilter = <unset>, vDiskId = null); config.extraConfig("scsi0:3.redo"): (key = "scsi0:3.redo", value = ""); Deleted: 


$ oc get pod -n wduan
NAME                  READY   STATUS              RESTARTS   AGE
sc-resourcegroup-04   0/1     ContainerCreating   0          170m


$ oc get pod sc-resourcegroup-04 -n wduan -o yaml | grep nodeName
  nodeName: compute-3


$ oc get node compute-3 -o yaml | tail -11
  volumesAttached:
  - devicePath: /dev/disk/by-id/wwn-0x6000c2994512ce66e7773f4366e9bb8a
    name: kubernetes.io/vsphere-volume/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1ae9daea-101c-11ea-91dc-0050568b94af.vmdk
  - devicePath: /dev/disk/by-id/wwn-0x6000c29ac292917e9d030724cb6b45e4
    name: kubernetes.io/vsphere-volume/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-1aec1a38-101c-11ea-91dc-0050568b94af.vmdk
  - devicePath: /dev/disk/by-id/wwn-0x6000c29940157f245b5a7a735746b2d5
    name: kubernetes.io/vsphere-volume/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-91bd1cbc-9e92-41d0-ae14-960084f3d902.vmdk
  volumesInUse:
  - kubernetes.io/vsphere-volume/1bc67be5-5c6f-44e7-ae96-73bc93fc198c-pvc-1aec1a38-101c-11ea-91dc-0050568b94af
  - kubernetes.io/vsphere-volume/d6e9b986-807b-4b44-b077-50a8d1103255-pvc-1ae9daea-101c-11ea-91dc-0050568b94af
  - kubernetes.io/vsphere-volume/e36bc1da-d27b-4caf-ae1b-e4fbfeec0808-pvc-91bd1cbc-9e92-41d0-ae14-960084f3d902


On the node compute-3,
sh-4.4# /usr/lib/udev/scsi_id -g -u -d /dev/sdd
36000c29940157f245b5a7a735746b2d5


sh-4.4# mount | grep sdd
sh-4.4# df -h | grep sdd

sh-4.4# ls -lh /dev/sdd
brw-rw----. 1 root disk 8, 48 Nov 26 10:00 /dev/sdd

sh-4.4# fdisk -l /dev/sdd
Disk /dev/sdd: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

sh-4.4# ls -lh /dev/disk/by-uuid/
total 0
lrwxrwxrwx. 1 root root 10 Nov 26 08:02 477c3d77-20c6-4ff3-8bb3-dc2543eedfbd -> ../../sda3
lrwxrwxrwx. 1 root root  9 Nov 26 09:58 544b95be-c9f0-4fc9-92f3-3942ea0fc81d -> ../../sdb
lrwxrwxrwx. 1 root root 10 Nov 26 08:03 91de875e-af22-4585-91cb-e74437f6af68 -> ../../sda2
lrwxrwxrwx. 1 root root  9 Nov 26 08:04 d9ebc209-83dc-4bac-88dd-cb3eaeabbdda -> ../../sdc


$ oc get pods -n wduan -o yaml | grep -w uid
    uid: e36bc1da-d27b-4caf-ae1b-e4fbfeec0808


sh-4.4# mount | grep e36bc1da-d27b-4caf-ae1b-e4fbfeec0808
tmpfs on /var/lib/kubelet/pods/e36bc1da-d27b-4caf-ae1b-e4fbfeec0808/volumes/kubernetes.io~secret/default-token-5vkxf type tmpfs (rw,relatime,seclabel)
sh-4.4# df -h | grep e36bc1da-d27b-4caf-ae1b-e4fbfeec0808
tmpfs           3.9G   24K  3.9G   1% /var/lib/kubelet/pods/e36bc1da-d27b-4caf-ae1b-e4fbfeec0808/volumes/kubernetes.io~secret/default-token-5vkxf
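
In short: the disk is attached to the VM (the node's volumesAttached entry wwn-0x6000c29940157f245b5a7a735746b2d5 matches the SCSI id reported for /dev/sdd), but the kubelet never formatted or mounted it, so only the secret tmpfs volume is mounted for the pod. The mapping can be double-checked via /dev/disk/by-id (rough check; the device and WWN are the ones already shown above):

sh-4.4# ls -l /dev/disk/by-id/ | grep -i 6000c29940157f245b5a7a735746b2d5
sh-4.4# lsblk -f /dev/sdd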

Comment 3 Liang Xia 2019-11-26 13:06:39 UTC
$ oc describe pod -n wduan
Name:               sc-resourcegroup-04
Namespace:          wduan
Priority:           0
PriorityClassName:  <none>
Node:               compute-3/139.178.76.25
Start Time:         Tue, 26 Nov 2019 18:00:10 +0800
Labels:             name=frontendhttp
Annotations:        openshift.io/scc: anyuid
Status:             Pending
IP:                 
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/aosqe/hello-openshift
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /mnt/local from local (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5vkxf (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  local:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  sc-resourcegroup-04
    ReadOnly:   false
  default-token-5vkxf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5vkxf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age                    From                Message
  ----     ------       ----                   ----                -------
  Warning  FailedMount  29m (x10 over 151m)    kubelet, compute-3  Unable to attach or mount volumes: unmounted volumes=[local], unattached volumes=[default-token-5vkxf local]: timed out waiting for the condition
  Warning  FailedMount  9m48s (x92 over 3h2m)  kubelet, compute-3  (combined from similar events): MountVolume.SetUp failed for volume "pvc-91bd1cbc-9e92-41d0-ae14-960084f3d902" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/e36bc1da-d27b-4caf-ae1b-e4fbfeec0808/volumes/kubernetes.io~vsphere-volume/pvc-91bd1cbc-9e92-41d0-ae14-960084f3d902 --scope -- mount -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-91bd1cbc-9e92-41d0-ae14-960084f3d902.vmdk /var/lib/kubelet/pods/e36bc1da-d27b-4caf-ae1b-e4fbfeec0808/volumes/kubernetes.io~vsphere-volume/pvc-91bd1cbc-9e92-41d0-ae14-960084f3d902
Output: Running scope as unit: run-rd4c939ccc1564752b1995d2c19c3affb.scope
mount: /var/lib/kubelet/pods/e36bc1da-d27b-4caf-ae1b-e4fbfeec0808/volumes/kubernetes.io~vsphere-volume/pvc-91bd1cbc-9e92-41d0-ae14-960084f3d902: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[nvme-ds1] kubevols/qe-minmli-428-xzwsj-dynamic-pvc-91bd1cbc-9e92-41d0-ae14-960084f3d902.vmdk does not exist.
  Warning  FailedMount  4m9s (x61 over 3h2m)  kubelet, compute-3  Unable to attach or mount volumes: unmounted volumes=[local], unattached volumes=[local default-token-5vkxf]: timed out waiting for the condition

Comment 8 Jan Safranek 2019-12-02 15:47:56 UTC
*** Bug 1775685 has been marked as a duplicate of this bug. ***

Comment 9 Fabio Bertinatto 2019-12-03 12:27:35 UTC
*** Bug 1777195 has been marked as a duplicate of this bug. ***

Comment 11 Erica von Buelow 2019-12-04 15:07:10 UTC
As an update: some CI issues were blocking this PR from merging. That problem has been resolved, and the fix should merge into 4.3 within a couple of hours: https://github.com/openshift/machine-config-operator/pull/1293
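
Once a build containing that PR is installed, a rough way to confirm the change has landed is to check that the kubelet on a worker now runs with the vSphere cloud provider set (node name taken from this report; oc debug requires sufficient privileges):

$ oc debug node/compute-3 -- chroot /host sh -c "ps -eaf | grep -o 'cloud-provider=[^ ]*' | sort -u"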

Comment 15 errata-xmlrpc 2020-01-23 11:14:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0062

