Bug 2092417 - [MS v2] Default storageclassclaims are not created in upgraded clusters
Summary: [MS v2] Default storageclassclaims are not created in upgraded clusters
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.11.0
Assignee: Subham Rai
QA Contact: Jilju Joy
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-06-01 13:55 UTC by Jilju Joy
Modified: 2024-04-05 17:02 UTC
CC: 5 users

Fixed In Version: 4.11.0-124
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-04-05 17:02:33 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github red-hat-storage ocs-operator pull 1729 0 None open odf-to-odf: create default scc when updgrading 2022-06-22 08:31:04 UTC
Github red-hat-storage ocs-operator pull 1752 0 None open Bug 2092417: [release-4.11] odf-to-odf: create default scc when updgrading 2022-07-18 12:24:50 UTC
Github red-hat-storage ocs-operator pull 1753 0 None open odf-to-odf: wrong parameter in storageclass 2022-07-18 12:31:05 UTC
Github red-hat-storage ocs-operator pull 1754 0 None open Bug 2092417: [release-4.11] odf-to-odf: wrong parameter in storageclass 2022-07-18 13:14:53 UTC

Description Jilju Joy 2022-06-01 13:55:00 UTC
Description of problem (please be detailed as possible and provide log
snippets):
Default storageclassclaims are not created for a consumer cluster upgraded to ODF 4.11.
Expected to have 2 storageclassclaims, one for each of the default storage classes ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs.

 
Logs collected after upgrade to ODF 4.11.0 - http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jijoy-j1-c1/jijoy-j1-c1_20220601T043509/logs/testcases_1654079214/
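For reference, the two missing default claims are expected to look roughly like the following (a minimal sketch based on the resources shown in comment 6; only the name, namespace, and spec.type are essential, all other fields are set by the operator):

```yaml
# Sketch of the two default StorageClassClaims the operator should
# create in the openshift-storage namespace after the upgrade.
apiVersion: ocs.openshift.io/v1alpha1
kind: StorageClassClaim
metadata:
  name: ocs-storagecluster-ceph-rbd
  namespace: openshift-storage
spec:
  type: blockpool
---
apiVersion: ocs.openshift.io/v1alpha1
kind: StorageClassClaim
metadata:
  name: ocs-storagecluster-cephfs
  namespace: openshift-storage
spec:
  type: sharedfilesystem
```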

========================================================
Version of all relevant components (if applicable):

$ oc get csv
NAME                                      DISPLAY                       VERSION           REPLACES                                  PHASE
mcg-operator.v4.11.0                      NooBaa Operator               4.11.0            mcg-operator.v4.10.2                      Succeeded
ocs-operator.v4.11.0                      OpenShift Container Storage   4.11.0            ocs-operator.v4.10.2                      Succeeded
ocs-osd-deployer.v2.0.2                   OCS OSD Deployer              2.0.2             ocs-osd-deployer.v2.0.1                   Succeeded
odf-csi-addons-operator.v4.11.0           CSI Addons                    4.11.0            odf-csi-addons-operator.v4.10.2           Succeeded
odf-operator.v4.11.0                      OpenShift Data Foundation     4.11.0            odf-operator.v4.10.2                      Succeeded
ose-prometheus-operator.4.10.0            Prometheus Operator           4.10.0            ose-prometheus-operator.4.8.0             Succeeded
route-monitor-operator.v0.1.418-6459408   Route Monitor Operator        0.1.418-6459408   route-monitor-operator.v0.1.408-c2256a2   Succeeded


$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.15   True        False         6h1m    Cluster version is 4.10.15



$ oc get csv odf-operator.v4.11.0 -o yaml | grep full_version
    full_version: 4.11.0-85

========================================================


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Default storageclassclaims are not created after the upgrade.

Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Create a provider and a consumer cluster with ODF version 4.10
2. Upgrade both the provider and the consumer to ODF 4.11
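After step 2, the presence of the default claims on the consumer cluster can be checked with commands like these (illustrative; to be run against the upgraded consumer):

```
# On the consumer cluster, after the upgrade to ODF 4.11:
oc get storageclassclaim -n openshift-storage
# Expected (but missing on affected clusters): two claims named
# ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs in Ready phase.

# The storage classes and volumesnapshotclasses derived from the claims:
oc get sc | grep ocs-storagecluster
oc get volumesnapshotclass | grep ocs-storagecluster
```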


Actual results:
Default storageclassclaims are not present after upgrade to 4.11

Expected results:
Expected to have 2 storageclassclaims, one for each of the default storage classes ocs-storagecluster-ceph-rbd and ocs-storagecluster-cephfs

Additional info:

Comment 4 Jilju Joy 2022-07-15 14:32:52 UTC
Note:
Verify the presence of the default storageclassclaims after upgrade to 4.11.
Verify that the volumesnapshotclasses are created. The fix for bug #2092417 creates a volumesnapshotclass per storageclassclaim, so when the default storageclassclaims are created, the volumesnapshotclasses are created automatically. Verify that volume snapshot and restore work.
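A minimal snapshot-and-restore check could look like the following (a sketch; the source PVC name `test-pvc` and the 1Gi size are assumptions, while the storage class and volumesnapshotclass names come from this cluster):

```yaml
# Snapshot an existing RBD-backed PVC (assumed name: test-pvc),
# then restore the snapshot into a new PVC.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-pvc-snap
spec:
  volumeSnapshotClassName: ocs-storagecluster-ceph-rbd
  source:
    persistentVolumeClaimName: test-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-restore
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  dataSource:
    name: test-pvc-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Restore succeeds when `test-pvc-restore` reaches the Bound phase and its data matches the snapshot source.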

Comment 5 Jilju Joy 2022-08-03 06:33:55 UTC
Verification blocked due to the bug #2114678

Comment 6 Jilju Joy 2022-08-08 10:20:09 UTC
Verified in version:
ODF 4.11.0-131
OCP 4.10.24

$ oc get csv
NAME                                      DISPLAY                       VERSION           REPLACES                                  PHASE
mcg-operator.v4.11.0                      NooBaa Operator               4.11.0            mcg-operator.v4.10.5                      Succeeded
ocs-operator.v4.11.0                      OpenShift Container Storage   4.11.0            ocs-operator.v4.10.5                      Succeeded
ocs-osd-deployer.v2.0.4                   OCS OSD Deployer              2.0.4             ocs-osd-deployer.v2.0.3                   Succeeded
odf-csi-addons-operator.v4.11.0           CSI Addons                    4.11.0            odf-csi-addons-operator.v4.10.5           Succeeded
odf-operator.v4.11.0                      OpenShift Data Foundation     4.11.0            odf-operator.v4.10.4                      Succeeded
ose-prometheus-operator.4.10.0            Prometheus Operator           4.10.0            ose-prometheus-operator.4.8.0             Succeeded
route-monitor-operator.v0.1.422-151be96   Route Monitor Operator        0.1.422-151be96   route-monitor-operator.v0.1.420-b65f47e   Succeeded



Default storageclassclaims are created after upgrading the consumer cluster to 4.11.0.

$ oc get storageclassclaim
NAME                          STORAGETYPE        PHASE
ocs-storagecluster-ceph-rbd   blockpool          Ready
ocs-storagecluster-cephfs     sharedfilesystem   Ready


$ oc get storageclassclaim -o yaml
apiVersion: v1
items:
- apiVersion: ocs.openshift.io/v1alpha1
  kind: StorageClassClaim
  metadata:
    creationTimestamp: "2022-08-08T08:31:27Z"
    finalizers:
    - storageclassclaim.ocs.openshift.io
    generation: 1
    labels:
      storageclassclaim.ocs.openshift.io/default: "true"
    name: ocs-storagecluster-ceph-rbd
    namespace: openshift-storage
    ownerReferences:
    - apiVersion: ocs.openshift.io/v1
      kind: StorageCluster
      name: ocs-storagecluster
      uid: 1164180b-855c-4bfb-913c-bc86b8578baf
    resourceVersion: "238118"
    uid: af5fd773-9c4e-4dcf-9e18-5561709f16a0
  spec:
    type: blockpool
  status:
    phase: Ready
- apiVersion: ocs.openshift.io/v1alpha1
  kind: StorageClassClaim
  metadata:
    creationTimestamp: "2022-08-08T08:31:27Z"
    finalizers:
    - storageclassclaim.ocs.openshift.io
    generation: 1
    labels:
      storageclassclaim.ocs.openshift.io/default: "true"
    name: ocs-storagecluster-cephfs
    namespace: openshift-storage
    ownerReferences:
    - apiVersion: ocs.openshift.io/v1
      kind: StorageCluster
      name: ocs-storagecluster
      uid: 1164180b-855c-4bfb-913c-bc86b8578baf
    resourceVersion: "237840"
    uid: 2ecf1836-01a7-4d24-a78f-c753f53afcf6
  spec:
    type: sharedfilesystem
  status:
    phase: Ready
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""


$ oc get sc
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2                           kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   3h36m
gp2-csi                       ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h34m
gp3 (default)                 ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h36m
gp3-csi                       ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h34m
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   105m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   105m


Also verified that the default volumesnapshotclasses are created (bug #2092417). Tested volume snapshot and restore.
$ oc get volumesnapshotclass
NAME                          DRIVER                                  DELETIONPOLICY   AGE
csi-aws-vsc                   ebs.csi.aws.com                         Delete           3h36m
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete           107m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete           107m


$ oc get volumesnapshotclass ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs -o yaml
apiVersion: v1
items:
- apiVersion: snapshot.storage.k8s.io/v1
  deletionPolicy: Delete
  driver: openshift-storage.rbd.csi.ceph.com
  kind: VolumeSnapshotClass
  metadata:
    annotations:
      ocs.openshift.io.storagesclassclaim: openshift-storage/ocs-storagecluster-ceph-rbd
    creationTimestamp: "2022-08-08T08:31:39Z"
    generation: 1
    name: ocs-storagecluster-ceph-rbd
    resourceVersion: "238116"
    uid: 0ad9d4bf-9ad8-43be-9a26-04cb5fb9a03c
  parameters:
    clusterID: openshift-storage
    csi.storage.k8s.io/snapshotter-secret-name: rook-ceph-client-29f4710e2768d92b7ef3490a96cecb02
    csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
- apiVersion: snapshot.storage.k8s.io/v1
  deletionPolicy: Delete
  driver: openshift-storage.cephfs.csi.ceph.com
  kind: VolumeSnapshotClass
  metadata:
    annotations:
      ocs.openshift.io.storagesclassclaim: openshift-storage/ocs-storagecluster-cephfs
    creationTimestamp: "2022-08-08T08:31:34Z"
    generation: 1
    name: ocs-storagecluster-cephfs
    resourceVersion: "237838"
    uid: d15635c3-0909-421b-92f6-ed8a9f5568de
  parameters:
    clusterID: e9c26c5b5c58f4f522d715929a2187e7
    csi.storage.k8s.io/snapshotter-secret-name: rook-ceph-client-26f7c7294dfa096f4b2b470dd3c1b7e7
    csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

