Bug 2100946 - Avoid temporary ceph health alert for new clusters where the insecure global id is allowed longer than necessary
Summary: Avoid temporary ceph health alert for new clusters where the insecure global id is allowed longer than necessary
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.11.0
Assignee: Travis Nielsen
QA Contact: Mugdha Soni
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-24 18:35 UTC by Travis Nielsen
Modified: 2023-12-08 04:29 UTC (History)
CC List: 6 users

Fixed In Version: 4.11.0-110
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-24 13:55:09 UTC
Embargoed:


Attachments: None


Links:
- Github red-hat-storage/rook pull 394 (open): Bug 2100946: mon: Disable insecure global ids for new deployments (last updated 2022-06-27 23:21:43 UTC)
- Github rook/rook pull 10505 (open): mon: Disable insecure global ids for new deployments (last updated 2022-06-24 18:36:03 UTC)
- Red Hat Product Errata RHSA-2022:6156 (last updated 2022-08-24 13:55:50 UTC)

Description Travis Nielsen 2022-06-24 18:35:27 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

The Ceph health warning AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED is raised whenever Rook has not yet disabled the insecure global ID setting in Ceph. Allowing insecure global IDs is necessary for upgraded clusters whose clients may not all be upgraded yet, but in new clusters the setting can be disabled immediately, which avoids the alert.
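
For reference, a minimal sketch of the underlying Ceph commands (run from the rook-ceph toolbox; this is the standard Ceph monitor option, shown here only to illustrate what Rook applies):

# Check whether insecure global ID reclaim is still allowed; it reports "true" while the warning is raised
ceph config get mon auth_allow_insecure_global_id_reclaim

# The setting Rook applies once no legacy clients are connected; on new clusters it can be applied immediately
ceph config set mon auth_allow_insecure_global_id_reclaim false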


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?

No

Is there any workaround available to the best of your knowledge?

Wait a few minutes until the OSDs are all configured and Rook finally disables the insecure setting after confirming that no legacy clients are connected.
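
If the transient warning is noisy during that window, it can also be silenced from the toolbox with the standard Ceph health mute (a sketch, not required for the fix):

# Mute the specific warning for 10 minutes while Rook finishes configuring the cluster
ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED 10m

# Confirm the warning has cleared on its own
ceph health detail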


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

1

Is this issue reproducible?

Yes

Can this issue be reproduced from the UI?

New clusters will always see this, even if only briefly.

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Install the cluster
2. Watch closely for Ceph health warnings (for example, from the toolbox pod as sketched below)
3. The warning goes away after a minute or two
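
A sketch of how to watch for the warning right after install, using the same toolbox pod and $TOOLS_POD variable shown in the verification below:

# Poll the detailed health right after the StorageCluster is created
oc rsh -n openshift-storage $TOOLS_POD ceph health detail

# Or follow the cluster log to see the warning raise and clear
oc rsh -n openshift-storage $TOOLS_POD ceph -w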


Actual results:

The health warning is visible for a minute or two

Expected results:

The health warning should not appear in new clusters

Comment 6 Mugdha Soni 2022-08-11 12:34:48 UTC
Hi Travis 

I installed the cluster with the latest OCP 4.11 and ODF 4.11 builds. I could not see the alert mentioned in the above comments. I checked in the UI, and I also checked the Ceph health immediately after the cluster was installed, as shown below.

[root@localhost tes]# oc rsh -n openshift-storage $TOOLS_POD ceph -s
  cluster:
    id:     0dbd2615-abfa-4ca2-a29d-44a68d0f8954
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 2m)
    mgr: a(active, since 2m)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 88s), 3 in (since 106s)
    rgw: 1 daemon active (1 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   11 pools, 321 pgs
    objects: 297 objects, 37 MiB
    usage:   54 MiB used, 1.5 TiB / 1.5 TiB avail
    pgs:     321 active+clean
 
  io:
    client:   1.2 KiB/s rd, 2.3 KiB/s wr, 2 op/s rd, 0 op/s wr

-----------------------------

sh-4.4$ ceph health
HEALTH_OK
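
As an additional check (a sketch assuming the same toolbox session), the monitor option itself can be queried; on a freshly installed cluster with this fix it is expected to report "false":

sh-4.4$ ceph config get mon auth_allow_insecure_global_id_reclaim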


Please let me know if this was supposed to be verified some other way and I misunderstood.

Thanks 
Mugdha

Comment 7 Travis Nielsen 2022-08-11 13:37:22 UTC
Mugdha, that sounds perfect. Since you looked at the Ceph health immediately after install and did not see any health warnings, thanks for the validation.

Comment 8 Mugdha Soni 2022-08-11 13:56:51 UTC
Based on comment #6 and comment #7, moving the bug to the Verified state.

Thank you
Mugdha

Comment 10 errata-xmlrpc 2022-08-24 13:55:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156

Comment 11 Red Hat Bugzilla 2023-12-08 04:29:22 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.

