Bug 2153198

Summary: [RFE] Allow specifying a specific realm for the RGW endpoint in the create-external-cluster-resources.py external mode script
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: daniel parkes <dparkes>
Component: rook
Assignee: Parth Arora <paarora>
Status: CLOSED COMPLETED
QA Contact: Neha Berry <nberry>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 4.12
CC: brgardne, hemoller, hnallurv, lithomas, mduasope, muagarwa, odf-bz-bot, paarora, thottanjiffin, tnielsen
Target Milestone: ---
Keywords: FutureFeature
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-08-07 08:49:47 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description daniel parkes 2022-12-14 09:21:36 UTC
Description of problem (please be as detailed as possible and provide log
snippets):


Customers want to use a specific realm for each OCP/ODF cluster they connect to an external Ceph cluster using the create-external-cluster-resources.py Python script.


So it looks like this:

OCP/ODF1 -> RGW ENDPOINT 1 -> REALM 1 -> Dedicated Pools for REALM1 -> External RHCS

OCP/ODF2 -> RGW ENDPOINT 2 -> REALM 2 -> Dedicated Pools for REALM2 -> External RHCS


In the current Python script, the radosgw-admin commands don't specify an --rgw-realm; for example:

https://github.com/rook/rook/blob/c4b9bea20e026acee1d0cbdde60f5f3738cfea6d/deploy/examples/create-external-cluster-resources.py#L1097-L1108

So the command always runs against the configured default realm. This leads to the following situation: when we run the script on OCP/ODF1 with Realm1 it works fine, but when running the script on the second cluster, OCP/ODF2, using Realm2, the Python script fails because the rgw-admin-ops-user only gets created in the default realm.

RGW endpoint for realm1: 10.10.10.10:9080
RGW endpoint for realm2: 10.10.10.11:9080

OCP/ODF cluster 1 works ok:
# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbd  --rgw-endpoint 10.10.10.10:9080 --rgw-pool-prefix zone1 --run-as-user client.ocp1

OCP/ODF cluster2 fails:
# python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbd  --rgw-endpoint 10.10.10.11:9080 --rgw-pool-prefix zone2 --run-as-user client.ocp2

Execution Failed: The provided rgw Endpoint, '10.10.10.11:9080', is invalid. We are validating by calling the adminops api through rgw-endpoint and validating the cluster_id '' is equal to the ceph cluster fsid '31d337a6-700e-11ed-b046-566fde9200c7'

# radosgw-admin user list --rgw-realm=realm2
[
    "user01",
    "dashboard"
]
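For context, the endpoint check that fails above boils down to a simple comparison. This is an illustrative sketch, not the script's actual code: the real script obtains the cluster_id by calling the RGW admin-ops API through the given endpoint, and the function name here is hypothetical.

```python
def validate_rgw_endpoint(cluster_id_from_rgw, ceph_fsid):
    """Return True when the RGW endpoint belongs to this Ceph cluster.

    The exporter rejects the endpoint when the cluster_id returned by the
    RGW admin-ops API does not match the cluster fsid. An empty cluster_id
    (as in the error above, where the admin-ops user is missing from the
    non-default realm) therefore fails validation.
    """
    return cluster_id_from_rgw == ceph_fsid

# The failing case from the log: admin-ops call returned '' for realm2.
print(validate_rgw_endpoint("", "31d337a6-700e-11ed-b046-566fde9200c7"))
```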


Would it be possible to add a realm option, or something along those lines, to the Python exporter script, so it can be used with multiple realms?

Thanks.
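One way the request could look, as a minimal sketch: the exporter could accept an optional --rgw-realm argument and append it to the radosgw-admin invocations it builds. The helper name, option wiring, and the caps string below are illustrative assumptions, not the actual patch.

```python
import argparse

def build_radosgw_admin_cmd(subcommand_args, rgw_realm=None):
    """Build a radosgw-admin command line, optionally scoped to a realm.

    `rgw_realm` is a hypothetical parameter illustrating the RFE; when it
    is None, the command keeps today's behaviour (default realm).
    """
    cmd = ["radosgw-admin"] + list(subcommand_args)
    if rgw_realm:
        cmd += ["--rgw-realm", rgw_realm]
    return cmd

parser = argparse.ArgumentParser()
parser.add_argument("--rgw-realm",
                    help="RGW realm to operate on (hypothetical flag)")
args = parser.parse_args(["--rgw-realm", "realm2"])

# Sketch of the admin-ops user creation, now scoped to the chosen realm.
cmd = build_radosgw_admin_cmd(
    ["user", "create", "--uid", "rgw-admin-ops-user",
     "--display-name", "Rook RGW Admin Ops user"],
    rgw_realm=args.rgw_realm,
)
print(" ".join(cmd))
```

With no realm supplied, the helper produces the same command as today, so existing single-realm deployments would be unaffected.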

Comment 3 Travis Nielsen 2022-12-14 21:05:06 UTC
Moving non-blockers to 4.13

Comment 6 Parth Arora 2023-01-24 14:57:22 UTC
Should the realm be provided by the user while running the script?
Then we could pass it along when we call create_rgw_admin_ops_user.

Comment 12 Parth Arora 2023-02-27 13:22:33 UTC
I see zone and zonegroup are also needed for multisite purposes.

But are you asking that these flags should also be passed during user creation?
https://github.com/rook/rook/blob/c4b9bea20e026acee1d0cbdde60f5f3738cfea6d/deploy/examples/create-external-cluster-resources.py#L1097-L1108

Comment 14 Heðin 2023-03-20 14:33:24 UTC
(In reply to Parth Arora from comment #12)
> I see zone and zonegroup are also needed for multisite purposes.
> 
> But are you asking that these flags should also be passed during user
> creation?
> https://github.com/rook/rook/blob/c4b9bea20e026acee1d0cbdde60f5f3738cfea6d/
> deploy/examples/create-external-cluster-resources.py#L1097-L1108

Yes, when linking the external Ceph cluster and ODF, we should be able to define custom values for the `radosgw-admin` options `--rgw-realm`, `--rgw-zonegroup`, and `--rgw-zone`, instead of falling back to the default setup.
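A minimal sketch of that idea (the flag names match radosgw-admin, but the helper and its wiring are illustrative assumptions, not the merged change): append the multisite scoping flags only when the user supplied them, so the default single-realm behaviour stays untouched.

```python
def multisite_flags(realm=None, zonegroup=None, zone=None):
    """Return the radosgw-admin multisite flags for the supplied scopes.

    Hypothetical helper: each flag is emitted only when its value was
    given, so omitting all three reproduces today's default-realm calls.
    """
    flags = []
    if realm:
        flags += ["--rgw-realm", realm]
    if zonegroup:
        flags += ["--rgw-zonegroup", zonegroup]
    if zone:
        flags += ["--rgw-zone", zone]
    return flags

cmd = ["radosgw-admin", "user", "list"] + multisite_flags(
    realm="realm2", zonegroup="zonegroup2", zone="zone2")
print(" ".join(cmd))
# → radosgw-admin user list --rgw-realm realm2 --rgw-zonegroup zonegroup2 --rgw-zone zone2
```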

Comment 21 daniel parkes 2023-04-11 15:45:58 UTC
Hi, I'm afraid I don't have a test cluster at my disposal right now to be able to test the new updated script.