Bug 1898200 - [cephadm]5.0 - rgw.rgw.rgw service is seen which is unknown in ceph orch ls command
Summary: [cephadm]5.0 - rgw.rgw.rgw service is seen which is unknown in ceph orch ls command
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Juan Miguel Olmo
QA Contact: Vasishta
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-11-16 16:11 UTC by Preethi
Modified: 2021-08-30 08:27 UTC
CC List: 3 users

Fixed In Version: ceph-16.1.0-486.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:27:12 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 48597 0 None None None 2021-01-12 15:25:48 UTC
Github ceph ceph pull 38883 0 None closed mgr/cephadm: Purge deleted services 2021-02-16 15:36:22 UTC
Red Hat Issue Tracker RHCEPH-1195 0 None None None 2021-08-30 00:16:17 UTC
Red Hat Product Errata RHBA-2021:3294 0 None None None 2021-08-30 08:27:26 UTC

Comment 1 Juan Miguel Olmo 2021-02-16 15:36:22 UTC
Waiting for the backport to Pacific.

Comment 6 Preethi 2021-03-08 18:13:34 UTC
@Juan, we do not see the issue when we deploy RGW services. This bug was logged for tracking because unknown services were appearing during RGW-related activity, and there were no clear steps to reproduce it, since we do not know exactly when it can be hit. For now, I do not see any such services being created, so I am moving this to Verified. I will file a new BZ if we see anything in the coming days.

  cluster:
    id:     d8a1d97c-7cbb-11eb-82af-002590fc26f6
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum magna011,magna014,magna013 (age 4d)
    mgr: magna011.vpdjxa(active, since 4d), standbys: magna014.pmkeku, magna013.evxipz
    osd: 12 osds: 12 up (since 4d), 12 in (since 4d)
    rgw: 5 daemons active (rgw_bz.TESTzone.magna014.mpujcu, rgw_bz.magna013.agpwvb, rgw_bz.magna014.oivsyb, rgw_bz.magna016.jerqyn, rgw_bz_new.TESTzone_new.magna016.usqivp)
 
  data:
    pools:   10 pools, 928 pgs
    objects: 754 objects, 166 KiB
    usage:   3.3 GiB used, 11 TiB / 11 TiB avail
    pgs:     928 active+clean
 
  io:
    client:   21 KiB/s rd, 0 B/s wr, 20 op/s rd, 10 op/s wr
 
  progress:
    Global Recovery Event (87m)
      [====........................] (remaining: 8h)
 
[ceph: root@magna011 /]# ceph orch ls
NAME                         RUNNING  REFRESHED  AGE   PLACEMENT                           IMAGE NAME                                                                                                                    IMAGE ID      
alertmanager                     0/1  -          -     count:1                             <unknown>                                                                                                                     <unknown>     
crash                            4/4  8m ago     4d    *                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:b2ca10515af7e243732ac10b43f68a0d218d9a34421ec3b807bdc33d58c5c00f  38e52bf51cef  
grafana                          0/1  -          -     count:1                             <unknown>                                                                                                                     <unknown>     
mgr                              3/3  8m ago     4d    magna011;magna013;magna014;count:3  mix                                                                                                                           38e52bf51cef  
mon                              3/3  8m ago     4d    magna011;magna013;magna014;count:3  mix                                                                                                                           38e52bf51cef  
node-exporter                    4/4  8m ago     4d    *                                   registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5                                                               f0a5cfd22f16  
osd.all-available-devices      12/12  8m ago     4d    *                                   registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:b2ca10515af7e243732ac10b43f68a0d218d9a34421ec3b807bdc33d58c5c00f  38e52bf51cef  
prometheus                       0/1  -          -     count:1                             <unknown>                                                                                                                     <unknown>     
rgw.rgw_bz                       3/3  8m ago     3d    magna013;magna014;magna016          registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:b2ca10515af7e243732ac10b43f68a0d218d9a34421ec3b807bdc33d58c5c00f  38e52bf51cef  
rgw.rgw_bz.TESTzone              1/1  8m ago     112m  magna014;count:1                    registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:b2ca10515af7e243732ac10b43f68a0d218d9a34421ec3b807bdc33d58c5c00f  38e52bf51cef  
rgw.rgw_bz_new.TESTzone_new      1/1  7m ago     89m   magna016;count:1                    registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:b2ca10515af7e243732ac10b43f68a0d218d9a34421ec3b807bdc33d58c5c00f  38e52bf51cef
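
For reference, one way to watch for a stray entry like the rgw.rgw.rgw service this bug describes is to diff the service names cephadm reports against the specs that were actually applied. The sketch below is only an illustration: it assumes jq is available inside the cephadm shell, and /tmp/expected_services is a hypothetical, manually maintained list of the applied service names.

# List every service name cephadm currently tracks (run inside "cephadm shell").
ceph orch ls --format json | jq -r '.[].service_name' | sort > /tmp/current_services
# Print names cephadm reports but that were never applied (hypothetical expected list).
comm -13 <(sort /tmp/expected_services) /tmp/current_services

If a stale entry does show up, it can be removed with "ceph orch rm <service_name>", which deletes the service specification and its daemons but does not remove RGW pools or data.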

Comment 8 errata-xmlrpc 2021-08-30 08:27:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

