Bug 2016288

Summary: [RFE] Defining a zone-group when deploying RGW service with cephadm
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Sergii Mykhailushko <smykhail>
Component: Cephadm
Assignee: Redouane Kachach Elhichou <rkachach>
Status: CLOSED ERRATA
QA Contact: Tejas <tchandra>
Severity: medium
Docs Contact: Akash Raj <akraj>
Priority: medium
Version: 5.0
CC: adking, akraj, kdreyer, lithomas, mgowri, mkasturi, mmuench, mobisht, prprakas, rkachach, rsachere, saraut, sostapov, tchandra, tserlin, vdas, vereddy
Target Milestone: ---
Keywords: FutureFeature
Target Release: 6.1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-17.2.6-17.el9cp
Doc Type: Enhancement
Doc Text:
.Ceph Object Gateway zonegroup can now be specified in the specification used by the orchestrator
Previously, the orchestrator could handle setting the realm and zone for the Ceph Object Gateway. However, setting the zonegroup was not supported. With this release, users can specify a `rgw_zonegroup` parameter in the specification that is used by the orchestrator. Cephadm sets the zonegroup for Ceph Object Gateway daemons deployed from the specification.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-06-15 09:15:29 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
Embargoed:
Bug Depends On:    
Bug Blocks: 2192813    

Description Sergii Mykhailushko 2021-10-21 08:16:02 UTC
Description of problem:

Hi,

I'm opening this based on the upstream bug below, which was closed due to inactivity:

[ cephadm/rgw: Add rgw_zonegroup to RGWSpec ]
https://tracker.ceph.com/issues/48340

It looks like with cephadm there is no option to specify a custom zonegroup:

https://github.com/ceph/ceph/blob/master/src/pybind/mgr/cephadm/services/cephadmservice.py#L767-L781

~~~
        # set rgw_realm and rgw_zone, if present
        if spec.rgw_realm:
            ret, out, err = self.mgr.check_mon_command({
                'prefix': 'config set',
                'who': f"{utils.name_to_config_section('rgw')}.{spec.service_id}",
                'name': 'rgw_realm',
                'value': spec.rgw_realm,
            })
        if spec.rgw_zone:
            ret, out, err = self.mgr.check_mon_command({
                'prefix': 'config set',
                'who': f"{utils.name_to_config_section('rgw')}.{spec.service_id}",
                'name': 'rgw_zone',
                'value': spec.rgw_zone,
            })
~~~
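
For reference, the requested change could presumably follow the same pattern as the existing realm/zone handling. A minimal sketch, assuming a hypothetical `rgw_zonegroup` attribute were added to `RGWSpec`:

~~~
        # hypothetical: set rgw_zonegroup in the same way, if present in the spec
        if spec.rgw_zonegroup:
            ret, out, err = self.mgr.check_mon_command({
                'prefix': 'config set',
                'who': f"{utils.name_to_config_section('rgw')}.{spec.service_id}",
                'name': 'rgw_zonegroup',
                'value': spec.rgw_zonegroup,
            })
~~~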

According to the upstream bug, the default zonegroup is effectively hardcoded everywhere.


However, in RHCS 4 it was possible to set the zonegroup using ceph-ansible (via the group_vars directory):

~~~
rgw_zone: zone1
rgw_zonegroup: prod
rgw_realm: realm1
~~~

These options are then reflected in the corresponding rgw client section of the ceph.conf.

Could you please check whether it's feasible to implement a way to specify a custom zonegroup using the RHCS 5 tooling?
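
For illustration, if cephadm grew such an option, an RGW service specification passed to `ceph orch apply -i` might look something like the following (a sketch only: the `rgw_zonegroup` field is the requested addition, and the service id, host, realm, zonegroup, and zone names are placeholders):

~~~
service_type: rgw
service_id: myrgw
placement:
  hosts:
    - host1
spec:
  rgw_realm: realm1
  rgw_zonegroup: prod   # requested option, analogous to rgw_realm/rgw_zone
  rgw_zone: zone1
~~~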

Thanks in advance,
Sergii

Comment 31 Redouane Kachach Elhichou 2023-05-18 08:08:47 UTC
@tchandra

Comment 36 errata-xmlrpc 2023-06-15 09:15:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3623