Bug 1970348
| Summary: | OCS CI deployment failing because of Ceph health warning for insecure global_id reclaims | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Container Storage | Reporter: | Mudit Agarwal <muagarwa> |
| Component: | rook | Assignee: | Travis Nielsen <tnielsen> |
| Status: | VERIFIED | QA Contact: | Neha Berry <nberry> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.8 | CC: | muagarwa, nberry, owasserm |
| Target Milestone: | --- | Keywords: | AutomationBackLog |
| Target Release: | OCS 4.8.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | 4.8.0-416.ci | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| : | 1974476, 1974477 | | |
| Last Closed: | | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1974476, 1974477 | | |
Description
Mudit Agarwal
2021-06-10 10:30:57 UTC
https://ceph-downstream-jenkins-csb-storage.apps.ocp4.prod.psi.redhat.com/job/ocs-ci/415/

04:35:39 - MainThread - ocs_ci.deployment.deployment - WARNING - Ceph health check failed with Ceph cluster health is not OK. Health: HEALTH_WARN mons are allowing insecure global_id reclaim

Even with this fix, the warning will still exist for a short time during a fresh install or upgrade. If the Ceph health is retrieved after the operator and mons have been upgraded, but before the csi driver, mgr, osd, mds, and other daemons have been updated, the health warning will still be present. Only after all daemons have been updated will Rook remove the health warning by disabling insecure global_id connections. Any insecure clients that still remain at that point will be denied access to the cluster.

Also see the CVE for more background on where this health warning comes from: https://docs.ceph.com/en/latest/security/CVE-2021-20288
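
As a rough illustration of how a CI health check could tolerate this warning during the install/upgrade window, the sketch below polls Ceph health and treats AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED as acceptable until the cluster settles. This is a minimal sketch, not the actual ocs-ci or Rook implementation; the toolbox deployment name, namespace, timeouts, and helper names are assumptions.

```python
import json
import subprocess
import time

# Assumption: a rook-ceph-tools deployment exists in this namespace; adjust as needed.
NAMESPACE = "openshift-storage"
TOOLBOX = "deploy/rook-ceph-tools"

# Health check described in CVE-2021-20288; expected transiently during install/upgrade
# until Rook disables insecure global_id reclaim on the mons.
TRANSIENT_CHECKS = {"AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED"}


def ceph(*args):
    """Run a ceph CLI command inside the toolbox pod and return its stdout."""
    cmd = ["oc", "-n", NAMESPACE, "exec", TOOLBOX, "--",
           "ceph", *args, "--format", "json-pretty"]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout


def health_ok(ignore_transient=True):
    """Return True if health is OK, optionally ignoring the global_id reclaim warning."""
    health = json.loads(ceph("health", "detail"))
    if health["status"] == "HEALTH_OK":
        return True
    if ignore_transient and health["status"] == "HEALTH_WARN":
        # Acceptable only if every active check is one of the known transient ones.
        return set(health.get("checks", {})) <= TRANSIENT_CHECKS
    return False


def wait_for_health(timeout=1200, interval=30):
    """Poll until the cluster reaches an acceptable health state or time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if health_ok():
            return
        time.sleep(interval)
    raise TimeoutError("Ceph cluster did not reach an acceptable health state")


if __name__ == "__main__":
    wait_for_health()
```

Once Rook has updated all daemons and turned off insecure global_id reclaim on the mons, the warning clears on its own, so a check like this only needs to ignore it during the install or upgrade window rather than permanently whitelisting it.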