Bug 2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded
Summary: vSphere CSI tests constantly failing with Rollout of the monitoring stack fai...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.10.0
Assignee: Jan Safranek
QA Contact: Wei Duan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-11-18 14:00 UTC by Jan Safranek
Modified: 2022-03-10 16:29 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-10 16:29:22 UTC
Target Upstream Version:
Embargoed:


Attachments


Links:
GitHub openshift/release pull 23769 (open): Bug 2024617: Fix vSphere CSI jobs on 4.10 - last updated 2021-11-18 14:23:23 UTC
Red Hat Product Errata RHSA-2022:0056 - last updated 2022-03-10 16:29:38 UTC

Description Jan Safranek 2021-11-18 14:00:26 UTC
Description of problem:
The periodic-ci-openshift-release-master-nightly-4.10-e2e-vsphere-csi job has been failing since Oct 06.

The last passing run: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.10-e2e-vsphere-csi/1445658827700572160

The first failed run: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.10-e2e-vsphere-csi/1445840023361425408

From the first failed run:

The monitoring cluster operator is Degraded:

"message": "Failed to rollout the stack. Error: updating prometheus-k8s: waiting for Prometheus object changes failed: waiting for Prometheus openshift-monitoring/k8s: expected 2 replicas, got 0 updated replicas",
"reason": "UpdatingPrometheusK8SFailed",


Prometheus pod prometheus-k8s-0 did not start:

"message": "MountVolume.SetUp failed for volume \"pvc-f0a2f1c4-8a2f-4b4d-b59a-91254038dc7b\" : mount failed: exit status 32\nMounting command: mount\nMounting arguments:  -o bind /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[WorkloadDatastore] 5137595f-7ce3-e95a-5c03-06d835dea807/ci-op-kigcjws7-55b1b-4-pvc-f0a2f1c4-8a2f-4b4d-b59a-91254038dc7b.vmdk /var/lib/kubelet/pods/64b27e85-0919-40cb-8cf9-9ce4f3f21612/volumes/kubernetes.io~vsphere-volume/pvc-f0a2f1c4-8a2f-4b4d-b59a-91254038dc7b\nOutput: mount: /var/lib/kubelet/pods/64b27e85-0919-40cb-8cf9-9ce4f3f21612/volumes/kubernetes.io~vsphere-volume/pvc-f0a2f1c4-8a2f-4b4d-b59a-91254038dc7b: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[WorkloadDatastore] 5137595f-7ce3-e95a-5c03-06d835dea807/ci-op-kigcjws7-55b1b-4-pvc-f0a2f1c4-8a2f-4b4d-b59a-91254038dc7b.vmdk does not exist.\n",


I did not dig deeper.

Comment 1 Jan Safranek 2021-11-18 14:07:21 UTC
It's probably caused by broken CSI migration in the vSphere CSI driver. Unfortunately, the CI job enables both vSphere CSI driver installation *and* CSI migration for vSphere using TechPreviewNoUpgrade.
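
For context, TechPreviewNoUpgrade is selected through the cluster-scoped FeatureGate object named "cluster". One way to enable it (a sketch only; the CI step may apply it differently) is:

  $ oc patch featuregate cluster --type merge -p '{"spec": {"featureSet": "TechPreviewNoUpgrade"}}'

With that feature set active, CSI migration for vSphere is turned on together with the CSI driver installation, which is the combination the CI job exercises.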

Comment 3 Wei Duan 2021-11-22 04:30:53 UTC
Will wait some more time to confirm that the 4.10 CI jobs succeed and that 4.8/4.9 are not impacted.

Comment 4 Wei Duan 2021-11-26 04:39:52 UTC
Checked that "operator conditions monitoring" did not fail in the latest CI job, and the must-gather logs show that TechPreviewNoUpgrade is not enabled.
Verified: pass.
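
The feature set can be confirmed on the cluster (or from the FeatureGate object captured in a must-gather) with roughly:

  $ oc get featuregate cluster -o jsonpath='{.spec.featureSet}'

An empty result means no feature set, and therefore no TechPreviewNoUpgrade, is enabled.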

Comment 7 errata-xmlrpc 2022-03-10 16:29:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056

