Bug 2100946

Summary: Avoid temporary ceph health alert for new clusters where the insecure global id is allowed longer than necessary
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Component: rook
Version: 4.11
Target Release: ODF 4.11.0
Hardware: Unspecified
OS: Unspecified
Reporter: Travis Nielsen <tnielsen>
Assignee: Travis Nielsen <tnielsen>
QA Contact: Mugdha Soni <musoni>
CC: kramdoss, madam, muagarwa, nberry, ocs-bugs, odf-bz-bot
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
Type: Bug
Fixed In Version: 4.11.0-110
Doc Type: No Doc Update
Last Closed: 2022-08-24 13:55:09 UTC

Description Travis Nielsen 2022-06-24 18:35:27 UTC
Description of problem (please be detailed as possible and provide log
snippests):

The Ceph health warning AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED is raised whenever Rook has not yet disabled the insecure global ID reclaim setting in Ceph. This mode is necessary for upgraded clusters whose clients may not all be upgraded yet, but in new clusters we can disable the setting immediately and thus avoid the alert.
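
For reference, the Ceph option behind this warning is the mon setting auth_allow_insecure_global_id_reclaim. A minimal sketch of the change (the same command an admin would run by hand; Rook applies the equivalent configuration itself):

  # Stop accepting insecure global_id reclaim from clients; once this is false,
  # the AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED warning is no longer raised.
  ceph config set mon auth_allow_insecure_global_id_reclaim false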


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

No

Is there any workaround available to the best of your knowledge?

Wait a few minutes until all OSDs are configured and Rook finally disables the option, once it confirms that no legacy clients are connected.
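
Whether Rook has already applied the change can be checked from the rook-ceph toolbox pod (a sketch, assuming the usual openshift-storage namespace and a toolbox shell):

  # Reports "true" while the warning can still be raised, "false" once Rook
  # has disabled insecure global_id reclaim.
  ceph config get mon auth_allow_insecure_global_id_reclaim

  # Shows the AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED detail while it is active.
  ceph health detail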


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

1

Is this issue reproducible?

Yes

Can this issue be reproduced from the UI?

New clusters will always see this, even if only briefly.

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Install the cluster
2. Watch closely for Ceph health warnings (see the example after these steps)
3. The warning goes away after a minute or two
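
One way to watch for the warning during step 2 is to poll the health from the toolbox pod (a sketch, assuming $TOOLS_POD holds the rook-ceph-tools pod name, as in comment 6 below):

  # Poll Ceph health right after install; the temporary
  # AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED warning would appear here.
  while true; do
    oc rsh -n openshift-storage $TOOLS_POD ceph health detail
    sleep 5
  done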


Actual results:

The health warning is visible for a minute or two

Expected results:

The health warning is not needed in new clusters

Comment 6 Mugdha Soni 2022-08-11 12:34:48 UTC
Hi Travis 

I installed the cluster with the latest OCP 4.11 and ODF 4.11 builds. I could not see the alert mentioned in the comments above. I checked in the UI, and I also checked the Ceph health immediately after the cluster was installed, as shown below.

[root@localhost tes]# oc rsh -n openshift-storage $TOOLS_POD ceph -s
  cluster:
    id:     0dbd2615-abfa-4ca2-a29d-44a68d0f8954
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 2m)
    mgr: a(active, since 2m)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 88s), 3 in (since 106s)
    rgw: 1 daemon active (1 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   11 pools, 321 pgs
    objects: 297 objects, 37 MiB
    usage:   54 MiB used, 1.5 TiB / 1.5 TiB avail
    pgs:     321 active+clean
 
  io:
    client:   1.2 KiB/s rd, 2.3 KiB/s wr, 2 op/s rd, 0 op/s wr

-----------------------------

sh-4.4$ ceph health
HEALTH_OK
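
As an additional check from the same toolbox shell, the option itself could be read to confirm it is already disabled on the fresh cluster (a sketch using the standard Ceph config command):

  # Expected to report "false" once Rook has turned off insecure global_id
  # reclaim on a new cluster.
  ceph config get mon auth_allow_insecure_global_id_reclaim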


Please let me know if this was supposed to be verified some other way and I misunderstood.

Thanks 
Mugdha

Comment 7 Travis Nielsen 2022-08-11 13:37:22 UTC
Mugdha, that sounds perfect, since you looked at the Ceph health immediately after install and did not see any health warnings. Thanks for the validation.

Comment 8 Mugdha Soni 2022-08-11 13:56:51 UTC
Based on comment #6 and comment #7, moving the bug to the verified state.

Thank you
Mugdha

Comment 10 errata-xmlrpc 2022-08-24 13:55:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156

Comment 11 Red Hat Bugzilla 2023-12-08 04:29:22 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days