Bug 2100946
| Summary: | Avoid temporary ceph health alert for new clusters where the insecure global id is allowed longer than necessary | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Travis Nielsen <tnielsen> |
| Component: | rook | Assignee: | Travis Nielsen <tnielsen> |
| Status: | CLOSED ERRATA | QA Contact: | Mugdha Soni <musoni> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.11 | CC: | kramdoss, madam, muagarwa, nberry, ocs-bugs, odf-bz-bot |
| Target Milestone: | --- | | |
| Target Release: | ODF 4.11.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | 4.11.0-110 | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-08-24 13:55:09 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
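For context, the alert in question is Ceph's AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED health warning, which stays raised as long as the `auth_allow_insecure_global_id_reclaim` mon setting is true. A brand-new cluster has no old clients that need the insecure reclaim path, so the setting can be disabled right away instead of being left on longer than necessary. Below is a minimal sketch of how the setting can be inspected and disabled by hand from the toolbox pod, assuming `$TOOLS_POD` is set as in the verification steps later in this report; the fix itself presumably has Rook apply this automatically for new clusters so no manual step is needed.

```
# Inspect the current value of the insecure global_id reclaim setting
# (true keeps the AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED warning raised).
oc rsh -n openshift-storage $TOOLS_POD \
    ceph config get mon auth_allow_insecure_global_id_reclaim

# On a fresh cluster with no pre-upgrade clients, disabling insecure
# global_id reclaim is safe and clears the health warning.
oc rsh -n openshift-storage $TOOLS_POD \
    ceph config set mon auth_allow_insecure_global_id_reclaim false
```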
Description
Travis Nielsen
2022-06-24 18:35:27 UTC
Hi Travis,
I installed the cluster with the latest OCP 4.11 and ODF 4.11 builds. I could not see the alert mentioned in the comments above. I checked in the UI, and I also checked the ceph health immediately after the cluster was installed, as shown below.
[root@localhost tes]# oc rsh -n openshift-storage $TOOLS_POD ceph -s
  cluster:
    id:     0dbd2615-abfa-4ca2-a29d-44a68d0f8954
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 2m)
    mgr: a(active, since 2m)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 88s), 3 in (since 106s)
    rgw: 1 daemon active (1 hosts, 1 zones)

  data:
    volumes: 1/1 healthy
    pools:   11 pools, 321 pgs
    objects: 297 objects, 37 MiB
    usage:   54 MiB used, 1.5 TiB / 1.5 TiB avail
    pgs:     321 active+clean

  io:
    client: 1.2 KiB/s rd, 2.3 KiB/s wr, 2 op/s rd, 0 op/s wr
-----------------------------
sh-4.4$ ceph health
HEALTH_OK
Please let me know if this was supposed to be verified in some other way and I misunderstood.
Thanks,
Mugdha
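As an additional check (a sketch beyond the recorded verification steps, not something run in this report), the absence of the specific health code and the final value of the mon setting could be confirmed directly from the toolbox shell; on a build carrying the fix, the second command would be expected to return false:

```
# Look for any global_id-related health code; prints a note if none is found.
sh-4.4$ ceph health detail | grep -i global_id || echo "no global_id warning"
# Confirm insecure global_id reclaim is disabled on the mons.
sh-4.4$ ceph config get mon auth_allow_insecure_global_id_reclaim
```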
Mugdha, that sounds perfect: you looked at the ceph health immediately after install and did not see any health warnings. Thanks for the validation.

Based on comment #6 and comment #7, moving the bug to the verified state. Thank you, Mugdha.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.