Bug 1970348 - OCS CI deployment failing because of Ceph health warning for insecure global_id reclaims
Summary: OCS CI deployment failing because of Ceph health warning for insecure global_id reclaims
Keywords:
Status: VERIFIED
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: rook
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: OCS 4.8.0
Assignee: Travis Nielsen
QA Contact: Neha Berry
URL:
Whiteboard:
Depends On:
Blocks: 1974476 1974477
Reported: 2021-06-10 10:30 UTC by Mudit Agarwal
Modified: 2023-08-03 08:30 UTC
CC List: 3 users

Fixed In Version: 4.8.0-416.ci
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 1974476 1974477 (view as bug list)
Environment:
Last Closed:
Embargoed:




Links
Github openshift/rook pull 252 (open): Bug 1970348: Disable insecure global id if no insecure clients (last updated 2021-06-10 21:21:04 UTC)
Github rook/rook pull 8089 (open): ceph: Disable insecure global id if no insecure clients (last updated 2021-06-10 10:31:53 UTC)

Description Mudit Agarwal 2021-06-10 10:30:57 UTC
Description of problem (please be as detailed as possible and provide log
snippets):


Version of all relevant components (if applicable):


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:


Additional info:

Comment 3 Mudit Agarwal 2021-06-10 10:32:39 UTC
https://ceph-downstream-jenkins-csb-storage.apps.ocp4.prod.psi.redhat.com/job/ocs-ci/415/

04:35:39 - MainThread - ocs_ci.deployment.deployment - WARNING - Ceph health check failed with Ceph cluster health is not OK. Health: HEALTH_WARN mons are allowing insecure global_id reclaim
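
For context, here is a minimal Python sketch of the kind of health check that produces this failure, assuming the ceph CLI is reachable from the check. The real logic lives in ocs_ci.deployment.deployment; the function name here is illustrative, not the actual ocs-ci API.

import json
import subprocess

def ceph_health():
    """Return (overall status, active health checks) from the ceph CLI."""
    # `ceph status --format json` includes a "health" object with the
    # overall status string and a map of active health checks.
    out = subprocess.check_output(["ceph", "status", "--format", "json"])
    health = json.loads(out)["health"]
    return health["status"], health.get("checks", {})

status, checks = ceph_health()
if status != "HEALTH_OK":
    # AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED is the check behind the
    # "mons are allowing insecure global_id reclaim" warning.
    raise RuntimeError(
        f"Ceph cluster health is not OK. Health: {status} "
        f"({', '.join(checks) or 'no check details'})"
    )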

Comment 7 Travis Nielsen 2021-06-10 21:21:05 UTC
Even with this fix, the warning will still appear for a short time during a fresh install or upgrade. If the Ceph health is retrieved after the operator and mons have been upgraded, but before the CSI driver, mgr, OSD, MDS, and other daemons have been updated, the health warning will still be present. Only after all daemons have been updated will Rook clear the health warning by disabling insecure global_id connections. Any insecure clients still connected at that point will be denied access to the cluster.
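
To make that sequencing concrete, here is a minimal Python sketch of the gating described above. Rook's actual implementation is in Go (see the linked rook/rook pull request); the helper names here are illustrative and assume only that the ceph CLI is available.

import json
import subprocess

def insecure_clients_present():
    """True if any connected client still reclaims its global_id insecurely.

    The AUTH_INSECURE_GLOBAL_ID_RECLAIM health check fires while unpatched
    clients or daemons are connected, so its presence means flipping the
    setting now would deny them access.
    """
    out = subprocess.check_output(
        ["ceph", "health", "detail", "--format", "json"])
    checks = json.loads(out).get("checks", {})
    return "AUTH_INSECURE_GLOBAL_ID_RECLAIM" in checks

def disable_insecure_global_id_reclaim():
    # Clears the "mons are allowing insecure global_id reclaim" warning by
    # telling the mons to stop accepting insecure global_id reclaims.
    subprocess.check_call(["ceph", "config", "set", "mon",
                           "auth_allow_insecure_global_id_reclaim", "false"])

# Only flip the switch once every daemon and client has been updated;
# otherwise the remaining insecure clients are locked out of the cluster.
if not insecure_clients_present():
    disable_insecure_global_id_reclaim()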

Comment 8 Travis Nielsen 2021-06-10 21:21:54 UTC
Also see the CVE for more background on where this health warning comes from: https://docs.ceph.com/en/latest/security/CVE-2021-20288

