Bug 1975528 - [RFE] Don't deploy CSI pods when cephFilesystems/cephBlockPools.reconcileStrategy = ignore [NEEDINFO]
Summary: [RFE] Don't deploy CSI pods when cephFilesystems/cephBlockPools.reconcileStrategy = ignore
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: documentation
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Anjana Suparna Sriram
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-06-23 20:43 UTC by Kyle Bader
Modified: 2023-08-09 16:43 UTC (History)
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:
olakra: needinfo? (kbader)
olakra: needinfo? (nberry)
tnielsen: needinfo? (kbader)



Description Kyle Bader 2021-06-23 20:43:54 UTC
Description of problem:

Even with a StorageCluster CR that is intended to avoid deploying block and file storage, the CSI components are deployed nonetheless. Excerpt from the StorageCluster CR:

spec:
  managedResources:
    cephConfig: {}
    cephBlockPools:
      reconcileStrategy: ignore
    cephFilesystems:
      reconcileStrategy: ignore

Version-Release number of selected component (if applicable):

1234525eade42626855bd2fc38abb366eb70bad9

How reproducible:

Easily


Steps to Reproduce:
1. Create a StorageCluster with the excerpt above.
2. Look at the pods/services/deployments (for example, with the commands below).
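
A quick way to check (a sketch; this assumes the default openshift-storage namespace):

  # List CSI-related deployments, daemonsets, and services created by the Rook operator
  oc -n openshift-storage get deployments,daemonsets,services | grep csi
  # List the running CSI pods
  oc -n openshift-storage get pods | grep csi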

Actual results:

svc/csi-cephfsplugin-metrics - 172.30.249.33 ports 8080->9081, 8081->9091
  deployment/csi-cephfsplugin-provisioner deploys k8s.gcr.io/sig-storage/csi-attacher:v3.0.2,k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2,k8s.gcr.io/sig-storage/csi-resizer:v1.0.1,k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4,quay.io/cephcsi/cephcsi:v3.3.0,quay.io/cephcsi/cephcsi:v3.3.0
    deployment #1 running for about an hour - 2 pods
  daemonset/csi-cephfsplugin manages k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1,quay.io/cephcsi/cephcsi:v3.3.0,quay.io/cephcsi/cephcsi:v3.3.0
    generation #1 running for about an hour - 3 pods

svc/csi-rbdplugin-metrics - 172.30.117.155 ports 8080->9080, 8081->9090
  deployment/csi-rbdplugin-provisioner deploys k8s.gcr.io/sig-storage/csi-provisioner:v2.0.4,k8s.gcr.io/sig-storage/csi-resizer:v1.0.1,k8s.gcr.io/sig-storage/csi-attacher:v3.0.2,k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.2,quay.io/cephcsi/cephcsi:v3.3.0,quay.io/cephcsi/cephcsi:v3.3.0
    deployment #1 running for about an hour - 2 pods
  daemonset/csi-rbdplugin manages k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1,quay.io/cephcsi/cephcsi:v3.3.0,quay.io/cephcsi/cephcsi:v3.3.0
    generation #1 running for about an hour - 3 pods

Expected results:

No CSI components should be deployed.

Comment 3 Jose A. Rivera 2021-06-29 16:13:28 UTC
Ultimately this is an optimization in Rook-Ceph. Their current policy is to always deploy the CSI drivers when at least one CephCluster exists. As mentioned, the drivers are fairly lightweight, but if the change in Rook is trivial, it may be worth implementing anyway.

That said, this is decidedly an optimization RFE. Moving to ODF 4.9.

Comment 5 Travis Nielsen 2021-06-29 20:12:34 UTC
Even if the policy is for OCS to ignore reconciling the blockpool and filesystem, the admin could still require the CSI driver if they create those CRs separately from OCS. 
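
For example, an admin could create a CephBlockPool directly, outside the StorageCluster reconcile, and it would still need the RBD CSI driver. A minimal sketch (the pool name "replicapool" and its spec are just an illustration):

oc -n openshift-storage apply -f - <<EOF
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool  # example name; a pool created outside of OCS
  namespace: openshift-storage
spec:
  failureDomain: host
  replicated:
    size: 3
EOF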

The CSI drivers can be disabled by setting these two variables in the rook-ceph-operator-config configmap in the openshift-storage namespace:
  ROOK_CSI_ENABLE_CEPHFS: "false"
  ROOK_CSI_ENABLE_RBD: "false"

See https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/operator-openshift.yaml#L105-L108
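
For example, something like this should work (a sketch; it just sets the two keys in the configmap named above):

  oc -n openshift-storage patch configmap rook-ceph-operator-config \
    --type merge \
    -p '{"data":{"ROOK_CSI_ENABLE_CEPHFS":"false","ROOK_CSI_ENABLE_RBD":"false"}}'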

I believe there is already a documentation topic that indicates how to update settings in this configmap, so we could add this there. The scenario doesn't seem common, and it is advanced enough that covering it in the docs seems reasonable. Otherwise, we would need an option in the OCS operator to update the configmap, which seems like overkill. So moving to documentation.

Comment 11 Travis Nielsen 2022-01-31 22:10:58 UTC
I see, there is no topic dedicated solely to the configmap customization. So we just need similar examples/commands for setting the suggested values in the applicable section. Kyle, where would you recommend this section be added?

