Bug 1996833 - ceph-external-cluster-details-exporter.py should have a read-only mode
Summary: ceph-external-cluster-details-exporter.py should have a read-only mode
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat
Component: rook
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.10.0
Assignee: Subham Rai
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-08-23 18:59 UTC by Lars Kellogg-Stedman
Modified: 2022-04-13 18:50 UTC (History)
7 users

Fixed In Version: 4.10.0-113
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-13 18:49:40 UTC
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github red-hat-storage rook pull 360 0 None open Bug 1996833: core: fix incorrect join command 2022-03-14 11:53:59 UTC
Github rook rook pull 9189 0 None Draft security: add read-only mode for external cluster script 2021-11-17 06:54:42 UTC
Github rook rook pull 9862 0 None open core: fix incorrect join command 2022-03-09 07:17:39 UTC
Red Hat Product Errata RHSA-2022:1372 0 None None None 2022-04-13 18:50:35 UTC

Description Lars Kellogg-Stedman 2021-08-23 18:59:12 UTC
In an environment in which an external Ceph cluster is maintained by someone other than the group maintaining the OpenShift cluster, they may be leery about allowing a random Python script to make changes to their Ceph environment.

The `ceph-external-cluster-details-exporter.py` script should have a read-only mode in which it will only collect the information necessary to create the JSON configuration blob. It should error out with appropriate messages if the administrator needs to create specific authentication principals or other resources.

Ultimately, it should be possible to configure OCS external mode without running the script at all (e.g., by entering configuration values into an appropriate form), for situations in which storage administrators are simply unwilling to run any sort of script in their environment.

Comment 2 Mudit Agarwal 2021-08-24 07:38:06 UTC
AFAIK, this script is owned by rook. Please change the component if that is not correct.

Comment 3 Subham Rai 2021-12-17 11:30:58 UTC
(In reply to Lars Kellogg-Stedman from comment #0)
> In an environment in which an external Ceph cluster is maintained by someone
> other than the group maintaining the OpenShift cluster, they may be leery
> about allowing a random Python script to make changes to their Ceph
> environment.
> 
> The `ceph-external-cluster-details-exporter.py` script should have a
> read-only mode in which it will only collect the information necessary to
> create the JSON configuration blob. It should error out with appropriate
> messages if the administrator needs to create specific authentication
> principals or other resources.
> 
> Ultimately, it should be possible to configure OCS external mode without
> running the script at all (e.g., by entering configuration values into an
> appropriate form), for situations in which storage administrators are simply
> unwilling to run any sort of script in their environment.

The fix is merged. Instead of `read-only`, we went with `dry-run`, as it keeps the script idempotent.
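
For reference, a minimal sketch of how such a `--dry-run` flag typically works: every Ceph command is routed through one helper that prints the command instead of executing it, producing the `Execute: '...'` lines seen in the verification log. This is hypothetical illustration code under assumed names (`run_command`), not the actual exporter implementation.

```python
# Hypothetical sketch of a dry-run wrapper; not the actual
# ceph-external-cluster-details-exporter.py implementation.
import subprocess

def run_command(cmd, dry_run=False):
    """In dry-run mode, print the command instead of executing it."""
    if dry_run:
        # Matches the "Execute: '...'" style of the verification log.
        print("Execute: '%s'" % " ".join(cmd))
        return ""
    return subprocess.check_output(cmd, text=True)

# With --dry-run set, commands are only printed, never run:
run_command(["ceph", "fs", "ls"], dry_run=True)
run_command(["ceph", "fsid"], dry_run=True)
```

Because nothing in the dry-run path mutates cluster state, storage administrators can inspect exactly which `ceph auth get-or-create` and `radosgw-admin` commands the script would issue before agreeing to run it for real.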

Comment 11 Vijay Avuthu 2022-03-29 05:27:59 UTC
Job: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/11205/consoleFull

Verified the dry-run option:

# python /tmp/external-cluster-details-exporter-aluszk5x.py --rbd-data-pool-name rbd --rgw-endpoint <endpoint_ip>:8080 --dry-run
Execute: 'ceph fs ls'
Execute: 'ceph fsid'
Execute: 'ceph quorum_status'
Execute: 'ceph auth get-or-create client.healthchecker mon allow r, allow command quorum_status, allow command version mgr allow command config osd allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index'
Execute: 'ceph mgr services'
Execute: 'ceph auth get-or-create client.csi-rbd-node mon profile rbd, allow command 'osd blocklist' osd profile rbd'
Execute: 'ceph auth get-or-create client.csi-rbd-provisioner mgr allow rw mon profile rbd, allow command 'osd blocklist' osd profile rbd'
Execute: 'ceph status'
Execute: 'ceph radosgw-admin user create --uid rgw-admin-ops-user --display-name Rook RGW Admin Ops user --caps buckets=*;users=*;usage=read;metadata=read;zone=read'

# python /tmp/external-cluster-details-exporter-aluszk5x.py -h | grep -i dry
                                                     [--dry-run]
  --dry-run             Dry run prints the executed commands without running


# python /tmp/external-cluster-details-exporter-aluszk5x.py --rbd-data-pool-name rbd  --dry-run
Execute: 'ceph fs ls'
Execute: 'ceph fsid'
Execute: 'ceph quorum_status'
Execute: 'ceph auth get-or-create client.healthchecker mon allow r, allow command quorum_status, allow command version mgr allow command config osd allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index'
Execute: 'ceph mgr services'
Execute: 'ceph auth get-or-create client.csi-rbd-node mon profile rbd, allow command 'osd blocklist' osd profile rbd'
Execute: 'ceph auth get-or-create client.csi-rbd-provisioner mgr allow rw mon profile rbd, allow command 'osd blocklist' osd profile rbd'
Execute: 'ceph status'

Changing status to Verified

Comment 13 errata-xmlrpc 2022-04-13 18:49:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1372

