Description of problem:
Even with a StorageCluster CR that intends to avoid deployment of block/file, CSI components are deployed nonetheless.

Excerpt from StorageCluster CR:

  spec:
    managedResources:
      cephConfig: {}
      cephBlockPools:
        reconcileStrategy: ignore
      cephFilesystems:
        reconcileStrategy: ignore

Version-Release number of selected component (if applicable):
1234525eade42626855bd2fc38abb366eb70bad9

How reproducible:
Easily

Steps to Reproduce:
1. Create a StorageCluster with the above excerpt
2. Look at the resulting pods/services/deployments

Actual results:

svc/csi-cephfsplugin-metrics - 172.30.249.33 ports 8080->9081, 8081->9091

deployment/csi-cephfsplugin-provisioner deploys
  k8s.gcr.io/sig-storage/csi-attacher:v3.0.2,
  k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2,
  k8s.gcr.io/sig-storage/csi-resizer:v1.0.1,
  k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4,
  quay.io/cephcsi/cephcsi:v3.3.0,
  quay.io/cephcsi/cephcsi:v3.3.0
  deployment #1 running for about an hour - 2 pods

daemonset/csi-cephfsplugin manages
  k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1,
  quay.io/cephcsi/cephcsi:v3.3.0,
  quay.io/cephcsi/cephcsi:v3.3.0
  generation #1 running for about an hour - 3 pods

svc/csi-rbdplugin-metrics - 172.30.117.155 ports 8080->9080, 8081->9090

deployment/csi-rbdplugin-provisioner deploys
  k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4,
  k8s.gcr.io/sig-storage/csi-resizer:v1.0.1,
  k8s.gcr.io/sig-storage/csi-attacher:v3.0.2,
  k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2,
  quay.io/cephcsi/cephcsi:v3.3.0,
  quay.io/cephcsi/cephcsi:v3.3.0
  deployment #1 running for about an hour - 2 pods

daemonset/csi-rbdplugin manages
  k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1,
  quay.io/cephcsi/cephcsi:v3.3.0,
  quay.io/cephcsi/cephcsi:v3.3.0
  generation #1 running for about an hour - 3 pods

Expected results:
No CSI components should be deployed.
Ultimately this is an optimization in Rook-Ceph. Its current policy is to always deploy the CSI drivers when at least one CephCluster exists. As mentioned, the drivers are fairly lightweight, but if the change in Rook is trivial, it may be worth implementing anyway. That said, this is decidedly an optimization RFE. Moving to ODF 4.9.
Even if the policy is for OCS to ignore reconciling the blockpool and filesystem, the admin could still require the CSI drivers if they create those CRs separately from OCS.

The CSI drivers can be disabled by setting these two variables in the rook-ceph-operator-config configmap in the openshift-storage namespace:

  ROOK_CSI_ENABLE_CEPHFS: "false"
  ROOK_CSI_ENABLE_RBD: "false"

See https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/operator-openshift.yaml#L105-L108

I believe there is already a documentation topic that explains how to update settings in this configmap, so we could add this there. This use case seems uncommon enough, and advanced enough, that covering it in the docs is reasonable. Otherwise, we would need an option in the OCS operator to update the configmap, which seems like overkill. So moving to documentation.
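For illustration, the relevant portion of that configmap would look roughly like the sketch below. This assumes the configmap already exists in the openshift-storage namespace (the operator normally creates it); only the two keys mentioned above are shown, and any other existing data keys should be left in place.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rook-ceph-operator-config
  namespace: openshift-storage
data:
  # Disable deployment of the CephFS and RBD CSI drivers.
  # Values are strings, so "false" must be quoted.
  ROOK_CSI_ENABLE_CEPHFS: "false"
  ROOK_CSI_ENABLE_RBD: "false"
```

In practice an admin would likely apply this with `oc edit configmap rook-ceph-operator-config -n openshift-storage` or an equivalent `oc patch` command rather than replacing the whole object.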
I see, there is not a topic dedicated solely to configmap customization. So we just need to add similar examples/commands for setting the suggested values in the applicable section. Kyle, where would you recommend this section be added?