Description of problem (please be as detailed as possible and provide log snippets):

The Ceph health warning AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED is raised whenever Rook has not yet disabled the insecure global ID setting in Ceph. This mode is necessary for upgraded clusters whose clients may not all have been upgraded yet, but in new clusters the setting can be disabled immediately, avoiding the alert.

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
No

Is there any workaround available to the best of your knowledge?
Wait a few minutes until the OSDs are all configured; Rook then sets the option once it confirms no legacy clients are connected.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Can this issue be reproduced?
Yes

Can this issue be reproduced from the UI?
New clusters will always see this, even if only briefly.

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Install the cluster
2. Watch closely for Ceph health warnings
3. The warning goes away after a minute or two

Actual results:
The health warning is visible for a minute or two

Expected results:
The health warning is not raised in new clusters
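For reference, a minimal sketch of what Rook eventually applies can be run manually from the toolbox pod. This assumes a running cluster and a $TOOLS_POD variable pointing at the rook-ceph-tools pod; the `ceph config` and `ceph health` commands are standard Ceph CLI, but running this by hand is not required — Rook performs the same change automatically.

```shell
# Inspect the current value of the insecure global_id setting (true means
# the AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED warning can be raised).
oc rsh -n openshift-storage $TOOLS_POD \
  ceph config get mon auth_allow_insecure_global_id_reclaim

# Disable insecure global_id reclaim; safe only once no legacy (pre-patched)
# clients are connected, which is why Rook waits before setting it.
oc rsh -n openshift-storage $TOOLS_POD \
  ceph config set mon auth_allow_insecure_global_id_reclaim false

# Confirm the health warning is gone.
oc rsh -n openshift-storage $TOOLS_POD ceph health detail
```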
Hi Travis,

I installed the cluster with the latest OCP 4.11 and ODF 4.11 builds. I could not see the alert mentioned in the comments above. I checked in the UI, and I also checked the Ceph health immediately after the cluster was installed, as shown below.

[root@localhost tes]# oc rsh -n openshift-storage $TOOLS_POD ceph -s
  cluster:
    id:     0dbd2615-abfa-4ca2-a29d-44a68d0f8954
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 2m)
    mgr: a(active, since 2m)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 88s), 3 in (since 106s)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   11 pools, 321 pgs
    objects: 297 objects, 37 MiB
    usage:   54 MiB used, 1.5 TiB / 1.5 TiB avail
    pgs:     321 active+clean

  io:
    client: 1.2 KiB/s rd, 2.3 KiB/s wr, 2 op/s rd, 0 op/s wr

-----------------------------
sh-4.4$ ceph health
HEALTH_OK

Please let me know if this was supposed to be verified some other way and I misunderstood.

Thanks,
Mugdha
Mugdha, that sounds perfect. Since you checked the Ceph health immediately after install and did not see any health warnings, thanks for the validation.
Based on comment #6 and comment #7, moving the bug to the Verified state. Thank you, Mugdha.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:6156
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 120 days.