Description of problem (please be as detailed as possible and provide log snippets):

Version of all relevant components (if applicable):

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
https://ceph-downstream-jenkins-csb-storage.apps.ocp4.prod.psi.redhat.com/job/ocs-ci/415/

04:35:39 - MainThread - ocs_ci.deployment.deployment - WARNING - Ceph health check failed with Ceph cluster health is not OK. Health: HEALTH_WARN mons are allowing insecure global_id reclaim
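If an automated check needs to tell this specific warning apart from other health issues, one option is to look for the AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED check code in the JSON health output. The helper below is a minimal sketch, not the ocs-ci implementation, and assumes the ceph CLI is reachable (e.g. from the rook-ceph toolbox pod):

# Hypothetical helper, not the ocs-ci code: detect the warning behind
# "mons are allowing insecure global_id reclaim".
import json
import subprocess


def insecure_global_id_warning_present():
    # `ceph health detail --format json` lists the active health checks by code.
    out = subprocess.check_output(
        ["ceph", "health", "detail", "--format", "json"]
    )
    checks = json.loads(out).get("checks", {})
    return "AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED" in checks


if __name__ == "__main__":
    print(insecure_global_id_warning_present())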
Even with this fix, the warning will still exist for a short time during a fresh install or upgrade. If the Ceph health is retrieved after the operator and mons have been upgraded, but before all of the CSI driver, mgr, OSD, MDS, and other daemons have been updated, the health warning will still be present. Only after all daemons have been updated will Rook remove the health warning by disabling insecure global_id connections. Any insecure clients that still exist will at that point be denied access to the cluster. The commands involved are sketched below.
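For reference, the mechanics can be illustrated with two operations: muting the warning temporarily while the upgrade finishes, and the mon setting Rook ultimately flips (auth_allow_insecure_global_id_reclaim). This is a sketch assuming admin access to the cluster (e.g. via the rook-ceph toolbox); Rook applies the final setting itself once every daemon is updated, so these calls are illustrative only:

import subprocess


def mute_insecure_global_id_warning(ttl="1w"):
    # Hide the warning for a limited time (e.g. for the duration of the
    # upgrade) without changing the mon setting itself.
    subprocess.check_call(
        ["ceph", "health", "mute",
         "AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED", ttl]
    )


def disable_insecure_global_id_reclaim():
    # The setting Rook flips once every daemon has been updated; after this,
    # clients still relying on insecure global_id reclaim are denied access.
    subprocess.check_call(
        ["ceph", "config", "set", "mon",
         "auth_allow_insecure_global_id_reclaim", "false"]
    )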
Also see the CVE for more background on where this health warning is coming from: https://docs.ceph.com/en/latest/security/CVE-2021-20288