Bug 1996829 - Permissions assigned to ceph auth principals when using external storage are too broad
Summary: Permissions assigned to ceph auth principals when using external storage are too broad
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.11.0
Assignee: Parth Arora
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-08-23 18:48 UTC by Lars Kellogg-Stedman
Modified: 2023-08-09 17:03 UTC
13 users

Fixed In Version: 4.10.0-113
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-24 13:48:17 UTC
Embargoed:


Attachments


Links
Github rook/rook pull 8994 (last updated 2021-11-08 16:30:44 UTC)
Red Hat Product Errata RHSA-2022:6156 (last updated 2022-08-24 13:48:51 UTC)

Description Lars Kellogg-Stedman 2021-08-23 18:48:46 UTC
The `ceph-external-cluster-details-exporter.py` script creates several authentication principals in the target Ceph cluster. The permissions assigned to these principals are too broad; they should be scoped to the specific resources that the OpenShift cluster will actually use.

That is, instead of:

    ceph auth add client.csi-rbd-provisioner \
            mgr "allow rw" \
            mon "profile rbd" \
            osd "profile rbd"

We should be creating something like:

    ceph auth add client.csi-rbd-provisioner \
            mgr "allow rw" \
            mon "profile rbd" \
            osd "profile rbd pool=mycluster-rbd-pool"

When the external Ceph cluster provides service to clients other than the
OpenShift cluster, the broad default permissions pose both a security risk and a
maintenance risk (e.g., an administrator accidentally deleting objects
from the wrong RBD pool).
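
For a principal that already exists, the same scoping can be applied after the fact with `ceph auth caps`, which replaces the principal's capability set in place. A minimal sketch, reusing the example pool name `mycluster-rbd-pool` from above:

    ceph auth caps client.csi-rbd-provisioner \
            mgr "allow rw" \
            mon "profile rbd" \
            osd "profile rbd pool=mycluster-rbd-pool"

Note that `ceph auth caps` overwrites all existing caps for the principal, so every cap the client still needs must be restated in the command.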

Comment 2 Rejy M Cyriac 2021-09-06 14:57:06 UTC
Based on request from engineering, the 'installation' component has been deprecated

Comment 4 Travis Nielsen 2021-09-27 15:22:01 UTC
Parth, can you take a look?

Comment 5 Sébastien Han 2021-11-15 11:34:12 UTC
Part of https://github.com/red-hat-storage/rook/tree/release-4.10

Comment 11 Mudit Agarwal 2022-03-03 09:58:39 UTC
Please add doc text

Comment 12 Parth Arora 2022-03-07 05:06:25 UTC
As decided, we will not be exposing this feature to customers in any documentation, because of QE and doc team limitations at this point in time.
It is okay to keep it in the build because it is an optional feature.
Thanks!

Comment 14 Mudit Agarwal 2022-03-29 08:44:50 UTC
Moving to 4.11 as the verification is still pending for the core product

Comment 21 Vijay Avuthu 2022-07-15 08:09:35 UTC
Verified with ocs-registry:4.11.0-113

Job: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/14657/console

2022-07-15 12:24:33  06:54:33 - MainThread - ocs_ci.utility.connection - INFO  - Executing cmd: python3 /tmp/external-cluster-details-exporter-hdkjadkg.py --rbd-data-pool-name rbd --rgw-endpoint 10.1.xxx.xx7:8080 --cluster-name vavuthu2-1996829 --cephfs-filesystem-name cephfs --restricted-auth-permission true on 10.1.xxx.xx9
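
With `--restricted-auth-permission true` and `--cluster-name vavuthu2-1996829`, the exporter creates name-scoped CSI users on the external cluster. Their caps can be inspected individually with `ceph auth get`; a sketch, run on the external cluster for each of the users listed below:

    ceph auth get client.csi-rbd-node-vavuthu2-1996829-rbd
    ceph auth get client.csi-cephfs-provisioner-vavuthu2-1996829-cephfs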

csi users:
==========

client.csi-cephfs-node-vavuthu2-1996829-cephfs
	key: AQApD9FiAo8pFBAA7nUkaEgvgeSupWvsZvkfOg==
	caps: [mds] allow rw
	caps: [mgr] allow rw
	caps: [mon] allow r, allow command 'osd blocklist'
	caps: [osd] allow rw tag cephfs *=cephfs

client.csi-cephfs-provisioner-vavuthu2-1996829-cephfs
	key: AQApD9FiODzTFBAAiv17o1f8rPClrQz8jXjZpQ==
	caps: [mgr] allow rw
	caps: [mon] allow r, allow command 'osd blocklist'
	caps: [osd] allow rw tag cephfs metadata=cephfs


client.csi-rbd-node-vavuthu2-1996829-rbd
	key: AQApD9FiGf3cEhAAF5r1AI5uJkP5LzkNZa3WDg==
	caps: [mon] profile rbd, allow command 'osd blocklist'
	caps: [osd] profile rbd pool=rbd

client.csi-rbd-provisioner-vavuthu2-1996829-rbd
	key: AQApD9FimUGDExAA36F7aTLzJzsvQnCFveVPzQ==
	caps: [mgr] allow rw
	caps: [mon] profile rbd, allow command 'osd blocklist'
	caps: [osd] profile rbd pool=rbd
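
The effect of the pool-scoped OSD cap can be spot-checked with a negative test: data access through one of these users against any pool other than `rbd` should be refused by the OSDs. A sketch, assuming a second pool named `otherpool` exists on the external cluster and the key shown above has been saved to a local keyring file:

    # expected to succeed: the osd cap covers pool=rbd
    rbd ls rbd --id csi-rbd-node-vavuthu2-1996829-rbd --keyring /tmp/csi-rbd-node.keyring
    # expected to be denied: the osd cap does not cover otherpool
    rbd ls otherpool --id csi-rbd-node-vavuthu2-1996829-rbd --keyring /tmp/csi-rbd-node.keyring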


$ oc -n openshift-storage get StorageClass ocs-external-storagecluster-cephfs -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
.
.
.
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner-vavuthu2-1996829-cephfs
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node-vavuthu2-1996829-cephfs
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner-vavuthu2-1996829-cephfs
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: cephfs
  pool: cephfs_data
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

$ oc -n openshift-storage get StorageClass ocs-external-storagecluster-ceph-rbd -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
.
.
.
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner-vavuthu2-1996829-rbd
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node-vavuthu2-1996829-rbd
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner-vavuthu2-1996829-rbd
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  imageFeatures: layering,deep-flatten,exclusive-lock,object-map,fast-diff
  imageFormat: "2"
  pool: rbd
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
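
Both storage classes reference secrets for the name-scoped users, so a quick end-to-end confirmation is that a PVC against each class still provisions normally. A sketch (the PVC name and size are arbitrary), shown here for the RBD class:

$ cat <<EOF | oc -n openshift-storage apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restricted-auth-check
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-external-storagecluster-ceph-rbd
EOF
$ oc -n openshift-storage get pvc restricted-auth-check

With volumeBindingMode set to Immediate, the claim should reach Bound without a consuming pod if the scoped provisioner user is working.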

Comment 23 errata-xmlrpc 2022-08-24 13:48:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:6156

