Bug 2089347 - ODF to ODF: provider cluster should not have the default rbd and cephfs Storageclasses
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-managed-service
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Dhruv Bindra
QA Contact: Neha Berry
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-05-23 13:31 UTC by Neha Berry
Modified: 2023-08-09 17:00 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-09-28 06:49:06 UTC
Embargoed:




Links
Github red-hat-storage/ocs-osd-deployer pull 194 (Merged): Update provider storageCluster CR to disable OCS storageClasses - Last Updated 2022-06-16 06:20:22 UTC

Description Neha Berry 2022-05-23 13:31:20 UTC
Description of problem:
============================
In an ODF to ODF provider cluster, one must not be able to create PVCs using the default rbd and cephfs StorageClasses that get created on ODF install.

With deployer 2.0.0, this issue was addressed via PR https://github.com/red-hat-storage/ocs-osd-deployer/pull/133/files; however, the fix only worked partially. Observations:

1. The CSI pods (provisioner and plugin) are no longer created on the provider.
2. The default StorageClasses and CephBlockPool still exist:

NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)                 kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   6h16m
gp2-csi                       ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   6h16m
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   4h52m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   4h52m

#oc get cephblockpool
cephblockpool-storageconsumer-35b116bc-7b87-4e20-922f-2a9af20a2563.yaml*
ocs-storagecluster-cephblockpool.yaml*

As part of this PR, though, neither the StorageClasses nor the CSI pods should have been created.
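
For context, a minimal sketch (not the deployer's confirmed implementation) of how the provider StorageCluster CR can disable the default OCS pools and StorageClasses, assuming the default StorageCluster name ocs-storagecluster and the upstream ocs-operator spec.managedResources reconcileStrategy fields:

# Setting reconcileStrategy to "ignore" asks ocs-operator not to create/manage the
# default block pool and filesystem, and with them the corresponding default
# StorageClasses (assumption; the exact field used by PR 194 is not confirmed here).
oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge \
  -p '{"spec":{"managedResources":{"cephBlockPools":{"reconcileStrategy":"ignore"},"cephFilesystems":{"reconcileStrategy":"ignore"}}}}'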

Version-Release number of selected component (if applicable):
=================================================================
deployer v2.0.0 onwards 

 oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.13   True        False         4d9h    Error while reconciling 4.10.13: the cluster operator insights is degraded
➜  ~ oc get csv -n openshift-storage -o json ocs-operator.v4.10.2 | jq '.metadata.labels["full_version"]'                                                                                                                      
"4.10.2-3"

➜  ~ echo "Deployer"; oc describe csv ocs-osd-deployer.v2.0.2|grep -i image
Deployer
    Mediatype:   image/svg+xml
                Image:  quay.io/openshift/origin-kube-rbac-proxy:4.10.0
                Image:             quay.io/osd-addons/ocs-osd-deployer:2.0.2-2
                Image:             quay.io/osd-addons/ocs-osd-deployer:2.0.2-2



How reproducible:
======================
Always


Steps to Reproduce:
=======================
1. In ROSA, for an ODF to ODF offering, install a provider cluster using either:
   A) Addon-based deployment: install OCP and then install the provider-qe addon
   B) Appliance mode: install a managed services provider using rosa create service
2. Check for the presence of the RBD and CephFS StorageClasses (see the commands sketched below).
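
A quick check, as a sketch (assuming the default openshift-storage namespace and the usual ceph-csi pod name prefixes):

# Default ODF StorageClasses that should not exist on the provider
oc get storageclass | grep -E 'ocs-storagecluster-ceph-rbd|ocs-storagecluster-cephfs'

# CSI provisioner/plugin pods that should not exist on the provider
oc get pods -n openshift-storage | grep -E 'csi-rbdplugin|csi-cephfsplugin'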

Actual results:
======================
Up to deployer 2.0.1: both StorageClasses are present.
From deployer 2.0.2 + ODF 4.10.2: the CephFS StorageClass is present, but the RBD StorageClass is not created because the default pool "ocs-storagecluster-cephblockpool" is no longer created after the fix for Bug 2078715.
The StorageCluster stays in Error state - https://bugzilla.redhat.com/show_bug.cgi?id=2089296

Expected results:
========================
Neither the default StorageClasses nor the CSI pods should be created.
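
A provider in the expected state would return no output from either of the checks sketched in the reproduction steps; for example (same assumptions as above):

oc get storageclass | grep ocs-storagecluster && echo "unexpected default StorageClasses found"
oc get pods -n openshift-storage | grep -E 'csi-rbdplugin|csi-cephfsplugin' && echo "unexpected CSI pods found"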


Additional info:
===========================
                   



oc get sc,cephblockpool,pods
NAME                                                      PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/gp2 (default)                 kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   4d9h
storageclass.storage.k8s.io/gp2-csi                       ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   4d9h
storageclass.storage.k8s.io/gp3-csi                       ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   4d9h
storageclass.storage.k8s.io/ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   4d9h
storageclass.storage.k8s.io/ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   4d9h

NAME                                                                                            AGE
cephblockpool.ceph.rook.io/cephblockpool-storageconsumer-5b716509-9514-499d-83b8-02653f013014   4d7h
cephblockpool.ceph.rook.io/cephblockpool-storageconsumer-c00d3d25-e898-4bbd-96a3-18465f0f7d46   4d7h
cephblockpool.ceph.rook.io/ocs-storagecluster-cephblockpool                                     4d9h

NAME                                                                  READY   STATUS      RESTARTS   AGE
pod/8b9812e03cb80021cb5974ee1876bd40009a23b678f9e6b1e3e63b98787l4fw   0/1     Completed   0          3d22h
pod/94963c92ac0decbfa826f438b1d9e5f555b46a84bda92fc43de6a5ed7a5w2sg   0/1     Completed   0          4d9h
pod/96f4060767c209c4e921c86f13b9ad5e1e84606de95ede4f448af2bdeekv7vp   0/1     Completed   0          4d9h
pod/addon-ocs-provider-qe-catalog-vcrnz                               1/1     Running     0          4d6h
pod/alertmanager-managed-ocs-alertmanager-0                           2/2     Running     0          4d9h
pod/alertmanager-managed-ocs-alertmanager-1                           2/2     Running     0          4d9h
pod/alertmanager-managed-ocs-alertmanager-2                           2/2     Running     0          4d9h
pod/csi-addons-controller-manager-7d99f546c9-bl9sn                    2/2     Running     0          3d22h
pod/ocs-metrics-exporter-5dcf6f88df-zlhsd                             1/1     Running     0          3d22h
pod/ocs-operator-5985b8b5f4-tn5ss                                     1/1     Running     0          3d22h
pod/ocs-osd-controller-manager-6b74c4cc67-4j5lr                       3/3     Running     0          3d22h
pod/ocs-provider-server-86d8bf774-5p67d                               1/1     Running     0          3d22h
pod/odf-console-58f6b6f5bb-lm95n                                      1/1     Running     0          3d22h
pod/odf-operator-controller-manager-584df64f8-mwpk6                   2/2     Running     0          3d22h
pod/prometheus-managed-ocs-prometheus-0                               3/3     Running     0          3d22h
pod/prometheus-operator-8547cc9f89-7mmbb                              1/1     Running     0          4d9h
pod/rook-ceph-crashcollector-ip-10-0-131-67.ec2.internal-bd6b5ll5q9   1/1     Running     0          3d22h
pod/rook-ceph-crashcollector-ip-10-0-159-28.ec2.internal-7d488dnhd8   1/1     Running     0          3d22h
pod/rook-ceph-crashcollector-ip-10-0-173-158.ec2.internal-56559j565   1/1     Running     0          3d22h
pod/rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-865579858m9s9   2/2     Running     0          4d9h
pod/rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-6869bf77p2tgc   2/2     Running     0          4d9h
pod/rook-ceph-mgr-a-5ff964d8b-msfxg                                   2/2     Running     0          4d9h
pod/rook-ceph-mon-a-5d766555cb-k7d7m                                  2/2     Running     0          4d9h
pod/rook-ceph-mon-b-656774f7dc-88bh9                                  2/2     Running     0          4d9h
pod/rook-ceph-mon-c-5bcb75c5b5-p5mkn                                  2/2     Running     0          4d9h
pod/rook-ceph-operator-5678fcf74-p72j6                                1/1     Running     0          3d22h
pod/rook-ceph-osd-0-6b57fbff84-b6svt                                  2/2     Running     0          3d22h
pod/rook-ceph-osd-1-646698bddc-zr8tk                                  2/2     Running     0          3d22h
pod/rook-ceph-osd-10-b77dc4df4-q464f                                  2/2     Running     0          3d22h
pod/rook-ceph-osd-11-85dff77676-j4f6c                                 2/2     Running     0          3d22h
pod/rook-ceph-osd-12-64cfb98575-zkczt                                 2/2     Running     0          3d22h
pod/rook-ceph-osd-13-7c449bbccd-stwsq                                 2/2     Running     0          3d22h
pod/rook-ceph-osd-14-5594c64997-ndx4r                                 2/2     Running     0          3d22h
pod/rook-ceph-osd-2-685c5699db-w6g4b                                  2/2     Running     0          3d22h
pod/rook-ceph-osd-3-7d7884d954-lpm4d                                  2/2     Running     0          3d22h
pod/rook-ceph-osd-4-7ff64fbbdc-lx4qv                                  2/2     Running     0          3d22h
pod/rook-ceph-osd-5-54896756d8-gjt9v                                  2/2     Running     0          3d22h
pod/rook-ceph-osd-6-74cb644645-xx7s4                                  2/2     Running     0          3d22h
pod/rook-ceph-osd-7-787cdbc4b-kgh7q                                   2/2     Running     0          3d22h
pod/rook-ceph-osd-8-66576858-gx8kn                                    2/2     Running     0          3d22h
pod/rook-ceph-osd-9-85cbf74b95-lh6hk                                  2/2     Running     0          3d22h

