Bug 2169779
Summary: | [vSphere]: rook-ceph-mon-* pvc are in pending state | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Vijay Avuthu <vavuthu> |
Component: | rook | Assignee: | Subham Rai <srai> |
Status: | CLOSED ERRATA | QA Contact: | Petr Balogh <pbalogh> |
Severity: | urgent | Docs Contact: | |
Priority: | unspecified | ||
Version: | 4.13 | CC: | hekumar, muagarwa, nberry, nigoyal, ocs-bugs, odf-bz-bot, pbalogh, srai, tnielsen |
Target Milestone: | --- | Keywords: | Automation |
Target Release: | ODF 4.13.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | No Doc Update
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2023-06-21 15:23:59 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | |
Description
Vijay Avuthu
2023-02-14 16:35:55 UTC
The provisioner is not creating and binding the PV:

```
waiting for a volume to be created, either by external provisioner "csi.vsphere.vmware.com" or manually created by system administrator
```

Can you create any other test pod that provisions a volume from the thin-csi storage class? The thin-csi provisioner doesn't appear to be working.

Can you try testing on the latest 4.13 builds, e.g. `4.13.0-86`? There was a fix included in `4.13.0-85`.

@pbalogh is the cluster live?

Once Nitin changed the storageclass name, we were able to see the error mentioned in the logs, but those errors come from k8s or OCP, where some security labels were not applied in the namespace. I tried applying some labels in the namespace and the errors were reduced to:

```
2023-02-22 07:53:22.763567 I | op-mon: waiting for canary pod creation rook-ceph-mon-b-canary
W0222 07:53:22.971775 1 warnings.go:70] would violate PodSecurity "baseline:latest": hostPath volumes (volumes "ceph-daemons-sock-dir", "rook-ceph-log", "rook-ceph-crash"), privileged (containers "mon", "log-collector" must not set securityContext.privileged=true)
```

I applied this label:

```
kubectl label --overwrite ns openshift-storage \
  pod-security.kubernetes.io/enforce=privileged \
  pod-security.kubernetes.io/warn=baseline \
  pod-security.kubernetes.io/audit=baseline
```

and the errors were limited to the single warning mentioned above. Please check: can we close this BZ?

Moving to ON_QA to verify the resolution.

Everything is in place, removing my needinfo.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742
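One way to exercise the thin-csi provisioner independently of rook-ceph, as suggested above, is to create a small standalone PVC against that storage class. A minimal sketch (the PVC name, namespace, and size are arbitrary choices for illustration, not taken from the bug):

```yaml
# Hypothetical test PVC to check whether the "csi.vsphere.vmware.com"
# provisioner behind the "thin-csi" storage class binds a volume at all.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: thin-csi-test   # illustrative name
  namespace: default    # illustrative namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: thin-csi
  resources:
    requests:
      storage: 1Gi
```

If `oc get pvc thin-csi-test` stays Pending with the same "waiting for a volume to be created" event, the provisioner itself is at fault rather than the rook-ceph mon PVCs.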
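The `kubectl label` command quoted in the comments can also be expressed declaratively as namespace metadata; a sketch of the equivalent manifest (only the labels are relevant, the rest is standard Namespace boilerplate):

```yaml
# Equivalent of the kubectl label command in the comment above:
# "privileged" enforcement allows the rook-ceph mon canary pods, which use
# hostPath volumes and securityContext.privileged=true, to be admitted.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/warn: baseline
    pod-security.kubernetes.io/audit: baseline
```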