Bug 2254159 - odf prep script for odf-external doesn't honour the --run-as-user parameter
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.14
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.15.0
Assignee: Parth Arora
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On:
Blocks: 2254345
 
Reported: 2023-12-12 11:27 UTC by Heðin
Modified: 2024-03-19 15:29 UTC
CC List: 10 users

Fixed In Version: 4.15.0-112
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 2254345 (view as bug list)
Environment:
Last Closed: 2024-03-19 15:29:37 UTC
Embargoed:




Links
GitHub rook/rook pull 13383 (open): external: fix the run as a user flag, last updated 2023-12-13 08:27:39 UTC
Red Hat Product Errata RHSA-2024:1383, last updated 2024-03-19 15:29:45 UTC

Description Heðin 2023-12-12 11:27:46 UTC

Comment 2 Heðin 2023-12-12 11:51:58 UTC
In odf-4.13 the --run-as-user parameter was honoured; this is no longer the case in 4.14.

How to reproduce:
1. Install OCP and odf-4.14
2. Get the 4.14 prep script:
```
oc get csv -n openshift-storage ocs-operator.v4.14.1-rhodf -o yaml | yq '.metadata.annotations."external.features.ocs.openshift.io/export-script"' | base64 -d > ceph-external-cluster-details-exporter-414.py
```
3. Run the prep script on your test Ceph cluster with the following parameters:
```
python create-external-cluster-resources-414.py \
  --run-as-user client.ocphealthcheck.ocp-lab-cluster-1 \
  --cluster-name ocp-lab-cluster-1 \
  --rgw-pool-prefix ocp-lab-cluster-1 \
  --rgw-realm-name ocp-lab-cluster-1 \
  --rgw-zonegroup-name ocp-lab-cluster-1 \
  --rgw-zone-name ocp-lab-cluster-1 \
  --restricted-auth-permission true \
  --cephfs-filesystem-name ocp-lab-cluster-1 \
  --cephfs-metadata-pool-name cephfs.ocp-lab-cluster-1.metadata \
  --cephfs-data-pool-name cephfs.ocp-lab-cluster-1.data \
  --rbd-data-pool-name rbd.ocp-lab-cluster-1 \
  --alias-rbd-data-pool-name rbd-ocp-lab-cluster-1 \
  --rados-namespace ocp-lab-cluster-1 \
  --rgw-endpoint <fqdn>:443 \
  --rgw-tls-cert-path lets-encrypt-chain.pem \
  --monitoring-endpoint <monitoring ip's> \
  --format json
```
4. Check which keyring was created:
```
# ceph auth ls|grep health
client.healthchecker
```

The expected result is a keyring named `client.ocphealthcheck.ocp-lab-cluster-1`; instead only the default `client.healthchecker` is created.
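For context, a flag like this is typically wired through argparse in the exporter script. The sketch below is hypothetical (names and defaults are assumptions, not the actual rook code); it only illustrates the intended behaviour, where an omitted flag falls back to `client.healthchecker` and a supplied flag overrides it. The 4.14 bug behaved as if the supplied value were ignored and the default always used.

```python
import argparse

def build_parser():
    # Hypothetical sketch of how the exporter would wire the flag;
    # the real rook script is more elaborate.
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--run-as-user",
        dest="run_as_user",
        default="client.healthchecker",  # used only when the flag is omitted
        help="Ceph entity name to create/use for the health-checker keyring",
    )
    return parser

def effective_user(argv):
    # Expected behaviour: return whatever the caller passed, or the default.
    # The 4.14 regression acted as if this always returned the default.
    args = build_parser().parse_args(argv)
    return args.run_as_user
```

With this wiring, `effective_user(["--run-as-user", "client.ocphealthcheck.ocp-lab-cluster-1"])` returns the custom name, matching the expected result described above.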

Comment 10 Vijay Avuthu 2024-01-17 03:23:15 UTC
1. build: ocs-registry:4.15.0-113

2024-01-16 12:13:18  06:43:17 - MainThread - ocs_ci.utility.connection - INFO  - Executing cmd: python3 /tmp/external-cluster-details-exporter-cf1qkgj9.py --rbd-data-pool-name rbd --rgw-endpoint 1x.x.xxx.xx:80 --run-as-user client.ocphealthcheck.ocp-lab-cluster-1 on 1x.x.xxx.xx
2024-01-16 12:13:20  06:43:20 - MainThread - ocs_ci.utility.connection - INFO  - Executing cmd: ceph auth get client.admin on 1x.x.xxx.xx

from ceph side

# ceph auth ls | grep -i client.ocphealthcheck.ocp-lab-cluster-1 -A4
client.ocphealthcheck.ocp-lab-cluster-1
	key: AQCGJaZlSKEJGRAA3zNJjybke2A17OTrMlO8tg==
	caps: [mgr] allow command config
	caps: [mon] allow r, allow command quorum_status, allow command version
	caps: [osd] profile rbd-read-only, allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index

job: https://url.corp.redhat.com/96cdffb
logs: https://url.corp.redhat.com/549c8af
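The `ceph auth ls` check above can be scripted. A small sketch (sample output inlined and truncated; the parsing rule, entity names at column 0 with key/caps lines indented, matches the output shown in this comment):

```python
def keyring_present(auth_ls_output: str, entity: str) -> bool:
    # `ceph auth ls` prints one entity name per stanza at column 0;
    # the key/caps lines underneath are indented.
    entities = [line.strip() for line in auth_ls_output.splitlines()
                if line and not line[0].isspace()]
    return entity in entities

# Truncated sample modelled on the verification output above.
sample = """client.ocphealthcheck.ocp-lab-cluster-1
\tkey: AQC...
\tcaps: [mgr] allow command config
client.admin
\tkey: AQB...
"""

print(keyring_present(sample, "client.ocphealthcheck.ocp-lab-cluster-1"))  # True
```

Feeding the real `ceph auth ls` output to `keyring_present` gives a pass/fail check for the custom user without grepping by hand.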

Comment 12 Vijay Avuthu 2024-01-18 07:14:07 UTC
2. with --run-as-user client.healthchecker

build: ocs-registry:4.15.0-119

2024-01-17 23:47:53  18:17:53 - MainThread - ocs_ci.utility.connection - INFO  - Executing cmd: python3 /tmp/external-cluster-details-exporter-k8c1g4dp.py --rbd-data-pool-name rbd --rgw-endpoint 1x.x.xxx.xxx:80 --run-as-user client.healthchecker on 1x.x.xxx.xxx
2024-01-17 23:47:56  18:17:56 - MainThread - ocs_ci.utility.connection - INFO  - Executing cmd: ceph auth get client.admin on 1x.x.xxx.xxx

# ceph auth ls | grep -i client.healthchecker -A4
client.healthchecker
	key: AQDSGahlkup3EBAAR8jbZfRR4pJ4uB6AL79xFg==
	caps: [mgr] allow command config
	caps: [mon] allow r, allow command quorum_status, allow command version
	caps: [osd] profile rbd-read-only, allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index
# 

job: https://url.corp.redhat.com/34db56f
logs: https://url.corp.redhat.com/d5ec1e4

3. without --run-as-user

build: ocs-registry:4.15.0-120

2024-01-18 12:25:12  06:55:12 - MainThread - ocs_ci.utility.connection - INFO  - Executing cmd: python3 /tmp/external-cluster-details-exporter-95oqbtoi.py --rbd-data-pool-name rbd --rgw-endpoint 1x.x.xxx.xxx:80 on 1x.x.xxx.xxx
2024-01-18 12:25:15  06:55:14 - MainThread - ocs_ci.utility.connection - INFO  - Executing cmd: ceph auth get client.admin on 1x.x.xxx.xxx
2024-01-18 12:25:15  06:55:15 - MainThread - ocs_ci.utility.templating - INFO  - apiVersion: v1

# ceph auth ls | grep -i client.healthchecker -A4
client.healthchecker
	key: AQBRy6hlboFPEBAAX3SHZ2JwNot3S4xAEy88FA==
	caps: [mgr] allow command config
	caps: [mon] allow r, allow command quorum_status, allow command version
	caps: [osd] profile rbd-read-only, allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index
# 

job: https://url.corp.redhat.com/cb9d648
logs: https://url.corp.redhat.com/1a2f242

Comment 13 Jenifer Abrams 2024-01-19 20:56:41 UTC
Hi, is there a 4.14 backport open for this fix? (I can't see the clone BZ 2254345)

I just hit this same issue trying to set up MetroDR on 4.14, and it is now blocking progress, since trying to work around --run-as-user hits other secret issues later on; see BZ 2259033.

Comment 16 errata-xmlrpc 2024-03-19 15:29:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:1383

