Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
This project is now read‑only. Starting Monday, February 2, please use https://ibm-ceph.atlassian.net/ for all bug tracking management.

Bug 1952573

Summary: [ceph-ansible] : rolling_update : only one of the RGW services were restarted when there were two RGWs
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vasishta <vashastr>
Component: Ceph-Ansible
Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED NOTABUG
QA Contact: Ameena Suhani S H <amsyedha>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: 5.0
CC: aschoen, ceph-eng-bugs, gmeno, nthomas, vereddy, ykaul
Target Milestone: ---
Keywords: AutomationBlocker, Regression
Target Release: 5.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-04-27 15:15:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Vasishta 2021-04-22 14:58:01 UTC
Description of problem:
Only one of the two RGW daemons on each RGW node was restarted when rolling_update was run to upgrade the cluster from RHCS 4.x to 5.x.

Version-Release number of selected component (if applicable):
ceph-ansible-6.0.3-1.el8cp.noarch

How reproducible:
Tried once

Steps to Reproduce:
1. Configure a 4.x cluster with nodes having multiple rgw daemons per node
2. Run rolling update to upgrade cluster to 5.x

Actual results:
After the upgrade, `ceph versions` still reports half of the RGW daemons on the old release:

"rgw": {
            "ceph version 14.2.11-139.el8cp (b8e1f91c99491fb2e5ede748a1c0738ed158d0f5) nautilus (stable)": 3,
            "ceph version 16.2.0-10.el8cp (e84b678f68605de54156f957685d0b7fee77ddf8) pacific (stable)": 3
        },
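The mixed-version state above can be checked programmatically. Below is a minimal sketch (not part of ceph-ansible; the helper name and sample data are illustrative) that flags daemon types reporting more than one running version in `ceph versions --format json` output:

```python
import json

def mixed_version_daemons(ceph_versions_json: str) -> dict:
    """Return daemon types that report more than one running Ceph version."""
    data = json.loads(ceph_versions_json)
    return {
        daemon: versions
        for daemon, versions in data.items()
        if daemon != "overall" and len(versions) > 1
    }

# Illustrative sample shaped like `ceph versions --format json` output:
sample = json.dumps({
    "mon": {"ceph version 16.2.0-10.el8cp pacific (stable)": 3},
    "rgw": {
        "ceph version 14.2.11-139.el8cp nautilus (stable)": 3,
        "ceph version 16.2.0-10.el8cp pacific (stable)": 3,
    },
})
print(mixed_version_daemons(sample))  # only "rgw" is mixed
```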

Expected results:
All RGW daemons must be restarted after the systemd unit files are updated.

Additional info:

Comment 2 Vasishta 2021-04-22 15:04:22 UTC
Restarting the service manually worked fine:

$ sudo podman ps
..
42ef40f44783  registry.redhat.io/rhceph/rhceph-4-rhel8:latest                                                                          2 hours ago  Up 2 hours ago          ceph-rgw-vasi-1619090929032-node5-rgw-iscsi-gw-rgw1
..               
$ sudo systemctl restart ceph-radosgw.rgw1.service
$ sudo podman ps |grep rgw1
74b6c8959171  registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-69664-20210416192140           23 seconds ago  Up 22 seconds ago          ceph-rgw-vasi-1619090929032-node1-mon-mgr-osd-rgw-rgw1
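The manual workaround above restarts one instance at a time. A hedged sketch of applying it to every RGW instance on a node (unit names are assumed to follow the `ceph-radosgw.<instance>.service` pattern shown above; this is a workaround sketch, not the ceph-ansible fix):

```shell
# List all ceph-radosgw service units currently known to systemd.
list_rgw_units() {
    systemctl list-units --no-legend 'ceph-radosgw*.service' | awk '{print $1}'
}

# Restart each RGW instance unit, not only the first one.
restart_all_rgw() {
    for unit in $(list_rgw_units); do
        sudo systemctl restart "$unit"
    done
}
```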