Bug 2016288 - [RFE] Defining a zone-group when deploying RGW service with cephadm
Summary: [RFE] Defining a zone-group when deploying RGW service with cephadm
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 6.1
Assignee: Redouane Kachach Elhichou
QA Contact: Tejas
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2192813
 
Reported: 2021-10-21 08:16 UTC by Sergii Mykhailushko
Modified: 2024-07-29 04:58 UTC
CC List: 17 users

Fixed In Version: ceph-17.2.6-17.el9cp
Doc Type: Enhancement
Doc Text:
.Ceph Object Gateway zonegroup can now be specified in the specification used by the orchestrator
Previously, the orchestrator could handle setting the realm and zone for the Ceph Object Gateway. However, setting the zonegroup was not supported. With this release, users can specify a `rgw_zonegroup` parameter in the specification that is used by the orchestrator. Cephadm sets the zonegroup for Ceph Object Gateway daemons deployed from the specification.
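For illustration, a service specification using this parameter might look like the following minimal sketch (the service ID, placement host, and realm/zonegroup/zone names are placeholders, not taken from this bug):

~~~
service_type: rgw
service_id: myrgw
placement:
  hosts:
    - host01
spec:
  rgw_realm: realm1
  rgw_zonegroup: prod
  rgw_zone: zone1
~~~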
Clone Of:
Environment:
Last Closed: 2023-06-15 09:15:29 UTC
Embargoed:




Links
System                      ID              Last Updated
Ceph Project Bug Tracker    48340           2021-10-21 09:33:49 UTC
Red Hat Issue Tracker       RHCEPH-2076     2021-10-21 08:17:26 UTC
Red Hat Product Errata      RHSA-2023:3623  2023-06-15 09:16:01 UTC

Description Sergii Mykhailushko 2021-10-21 08:16:02 UTC
Description of problem:

Hi,

I'm opening this based on the upstream bug, which was closed due to inactivity:

[ cephadm/rgw: Add rgw_zonegroup to RGWSpec ]
https://tracker.ceph.com/issues/48340

It looks like there is no option to specify a custom zonegroup with cephadm:

https://github.com/ceph/ceph/blob/master/src/pybind/mgr/cephadm/services/cephadmservice.py#L767-L781

~~~
        # set rgw_realm and rgw_zone, if present
        if spec.rgw_realm:
            ret, out, err = self.mgr.check_mon_command({
                'prefix': 'config set',
                'who': f"{utils.name_to_config_section('rgw')}.{spec.service_id}",
                'name': 'rgw_realm',
                'value': spec.rgw_realm,
            })
        if spec.rgw_zone:
            ret, out, err = self.mgr.check_mon_command({
                'prefix': 'config set',
                'who': f"{utils.name_to_config_section('rgw')}.{spec.service_id}",
                'name': 'rgw_zone',
                'value': spec.rgw_zone,
            })
~~~
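For comparison, here is a minimal sketch of what the missing handling could look like, assuming `RGWSpec` gained an `rgw_zonegroup` attribute (hypothetical here, simply mirroring the two blocks above):

~~~
        # Hypothetical sketch of the analogous handling this RFE asks for,
        # assuming RGWSpec gains an rgw_zonegroup attribute (not present
        # at the time of this report):
        if spec.rgw_zonegroup:
            ret, out, err = self.mgr.check_mon_command({
                'prefix': 'config set',
                'who': f"{utils.name_to_config_section('rgw')}.{spec.service_id}",
                'name': 'rgw_zonegroup',
                'value': spec.rgw_zonegroup,
            })
~~~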

According to the upstream bug, it looks like we're hardcoding the default zonegroup everywhere.


However, in RHCS 4 it was possible to set it using ceph-ansible (in the group_vars directory):

~~~
rgw_zone: zone1
rgw_zonegroup: prod
rgw_realm: realm1
~~~

These options are then reflected in the corresponding RGW client section of ceph.conf.
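For example, with the group_vars above, the rendered ceph.conf section would look roughly like this (the client section name is a placeholder; it depends on the host and instance):

~~~
[client.rgw.host01.rgw0]
rgw_realm = realm1
rgw_zonegroup = prod
rgw_zone = zone1
~~~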

Could you please check whether it's feasible to implement a way to specify a custom zonegroup using the RHCS 5 tooling?

Thanks in advance,
Sergii

Comment 31 Redouane Kachach Elhichou 2023-05-18 08:08:47 UTC
@tchandr

Comment 36 errata-xmlrpc 2023-06-15 09:15:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3623

