Waiting for backport to pacific
@Juan, We do not see the issue when we deploy RGW services. However, this issue was logged for tracking because unknown services were getting created during RGW-related activity, and there were no clear steps to reproduce it, as we do not know when we might hit it again. For now, I do not see any such services getting created, hence moving it to Verified. Will create a new BZ if we see anything in the coming days.

  cluster:
    id:     d8a1d97c-7cbb-11eb-82af-002590fc26f6
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum magna011,magna014,magna013 (age 4d)
    mgr: magna011.vpdjxa(active, since 4d), standbys: magna014.pmkeku, magna013.evxipz
    osd: 12 osds: 12 up (since 4d), 12 in (since 4d)
    rgw: 5 daemons active (rgw_bz.TESTzone.magna014.mpujcu, rgw_bz.magna013.agpwvb, rgw_bz.magna014.oivsyb, rgw_bz.magna016.jerqyn, rgw_bz_new.TESTzone_new.magna016.usqivp)

  data:
    pools:   10 pools, 928 pgs
    objects: 754 objects, 166 KiB
    usage:   3.3 GiB used, 11 TiB / 11 TiB avail
    pgs:     928 active+clean

  io:
    client: 21 KiB/s rd, 0 B/s wr, 20 op/s rd, 10 op/s wr

  progress:
    Global Recovery Event (87m)
      [====........................] (remaining: 8h)

[ceph: root@magna011 /]# ceph orch ls
NAME                         RUNNING  REFRESHED  AGE   PLACEMENT                           IMAGE NAME                                                                                                                     IMAGE ID
alertmanager                 0/1      -          -     count:1                             <unknown>                                                                                                                      <unknown>
crash                        4/4      8m ago     4d    *                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:b2ca10515af7e243732ac10b43f68a0d218d9a34421ec3b807bdc33d58c5c00f  38e52bf51cef
grafana                      0/1      -          -     count:1                             <unknown>                                                                                                                      <unknown>
mgr                          3/3      8m ago     4d    magna011;magna013;magna014;count:3  mix                                                                                                                            38e52bf51cef
mon                          3/3      8m ago     4d    magna011;magna013;magna014;count:3  mix                                                                                                                            38e52bf51cef
node-exporter                4/4      8m ago     4d    *                                   registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5                                                                f0a5cfd22f16
osd.all-available-devices    12/12    8m ago     4d    *                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:b2ca10515af7e243732ac10b43f68a0d218d9a34421ec3b807bdc33d58c5c00f  38e52bf51cef
prometheus                   0/1      -          -     count:1                             <unknown>                                                                                                                      <unknown>
rgw.rgw_bz                   3/3      8m ago     3d    magna013;magna014;magna016          registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:b2ca10515af7e243732ac10b43f68a0d218d9a34421ec3b807bdc33d58c5c00f  38e52bf51cef
rgw.rgw_bz.TESTzone          1/1      8m ago     112m  magna014;count:1                    registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:b2ca10515af7e243732ac10b43f68a0d218d9a34421ec3b807bdc33d58c5c00f  38e52bf51cef
rgw.rgw_bz_new.TESTzone_new  1/1      7m ago     89m   magna016;count:1                    registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:b2ca10515af7e243732ac10b43f68a0d218d9a34421ec3b807bdc33d58c5c00f  38e52bf51cef
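The verification step above boils down to comparing the services `ceph orch ls` reports against the set we deployed ourselves. A minimal sketch of that check follows; the sample listing and the stray "rgw.unknown_svc" entry are hypothetical stand-ins for illustration, since no unknown service was actually present on this cluster.

```shell
# Hypothetical sample of rgw service names; on a live cluster you would
# collect them instead with something like:
#   orch_services=$(ceph orch ls --service-type rgw --format json | ...)
orch_services='rgw.rgw_bz
rgw.rgw_bz.TESTzone
rgw.rgw_bz_new.TESTzone_new
rgw.unknown_svc'

# Services we intentionally created (taken from the ceph orch ls output above).
expected='rgw.rgw_bz rgw.rgw_bz.TESTzone rgw.rgw_bz_new.TESTzone_new'

flagged=''
for svc in $orch_services; do
  case " $expected " in
    *" $svc "*) ;;  # known service, nothing to do
    *)
      flagged="$flagged $svc"
      echo "unexpected service: $svc"  # candidate for: ceph orch rm "$svc"
      ;;
  esac
done
```

Anything flagged this way could then be removed with `ceph orch rm <service_name>` before re-checking `ceph orch ls`.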
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3294