Description of problem (please be as detailed as possible and provide log snippets):

Red Hat OpenShift Virtualization 4.15 supports detecting a default StorageClass for virtualization workloads. This StorageClass can be different from the 'normal' default StorageClass. The OCS Operator creates the ODF virtualization StorageClass and should annotate it with storageclass.kubevirt.io/is-default-virt-class: "true" so that Red Hat OpenShift Virtualization uses it.

Version of all relevant components (if applicable):

Versions compatible with Red Hat OpenShift Virtualization 4.15 -> ODF 4.14.

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

No, but it is a major user experience enhancement that will prevent issues when customers use a non-virtualization ODF StorageClass. Several outages have been reported because of this.

Is there any workaround available to the best of your knowledge?

Yes, customers need to select the right StorageClass themselves. However, this has proven difficult to make happen: customers take the easiest path (not reading the complete documentation), and that causes trouble at the moment.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

1

Is this issue reproducible?

Yes.

Can this issue be reproduced from the UI?

Yes.

If this is a regression, please provide more details to justify this:

No regression.

Expected results:

The ODF StorageClass for virtualization should have the storageclass.kubevirt.io/is-default-virt-class: "true" annotation. The StorageClass is only created when Red Hat OpenShift Virtualization (or KubeVirt) is installed.

Additional info:

Upstream PR from a KubeVirt developer: https://github.com/red-hat-storage/ocs-operator/pull/2213
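For illustration only (a hedged sketch, not part of this report): until the operator sets the annotation itself, it could also be applied by hand with oc annotate. The StorageClass name below matches the one the operator creates, as seen in the later comments:

$ oc annotate storageclass ocs-storagecluster-ceph-rbd-virtualization \
    storageclass.kubevirt.io/is-default-virt-class="true" --overwrite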
With the deployed odf-operator version 4.14.0-rhodf there is no `ocs-storagecluster-ceph-rbd-virtualization` storage class; we only have `ocs-storagecluster-ceph-rbd` with the annotation and `mapOptions: krbd:rxbounce`. Alex is fixing this, moving to Assigned.
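A quick way to check which storage classes exist and which one, if any, carries the virt-default annotation (a sketch, not output captured from this cluster; the jsonpath escapes the dots in the annotation key):

$ oc get sc
$ oc get sc -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.storageclass\.kubevirt\.io/is-default-virt-class}{"\n"}{end}'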
Looks like it's fixed:

$ oc get sc ocs-storagecluster-ceph-rbd-virtualization -oyaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    description: Provides RWO and RWX Block volumes suitable for Virtual Machine disks
    storageclass.kubernetes.io/is-default-class: "false"
    storageclass.kubevirt.io/is-default-virt-class: "true"
  creationTimestamp: "2023-10-31T11:23:05Z"
  name: ocs-storagecluster-ceph-rbd-virtualization
  resourceVersion: "69636"
  uid: e971c94f-d0a7-4ead-8e70-fc8f3c9f0e10
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  imageFeatures: layering,deep-flatten,exclusive-lock,object-map,fast-diff
  imageFormat: "2"
  mapOptions: krbd:rxbounce
  mounter: rbd
  pool: ocs-storagecluster-cephblockpool
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

$ oc get sc ocs-storagecluster-ceph-rbd -oyaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    description: Provides RWO Filesystem volumes, and RWO and RWX Block volumes
    storageclass.kubernetes.io/is-default-class: "false"
  creationTimestamp: "2023-10-31T11:16:59Z"
  name: ocs-storagecluster-ceph-rbd
  resourceVersion: "69634"
  uid: 47532e8e-c413-4022-9b3f-0dea16295db9
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  imageFeatures: layering,deep-flatten,exclusive-lock,object-map,fast-diff
  imageFormat: "2"
  pool: ocs-storagecluster-cephblockpool
provisioner: openshift-storage.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
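With the annotation in place, OpenShift Virtualization should pick this class automatically. A hedged illustration (names are hypothetical, not from this report), assuming the 4.15 default-virt-class semantics described above: a CDI DataVolume that uses the spec.storage API and omits storageClassName, which should then resolve to the annotated StorageClass:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-vm-disk        # hypothetical name
spec:
  source:
    blank: {}                  # empty disk, for illustration only
  storage:                     # no storageClassName set: CDI is expected to fall back
    resources:                 # to the class annotated is-default-virt-class: "true"
      requests:
        storage: 10Gi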
Verified on ODF v4.14.0-157

$ oc get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.14.0-rc.7   True        False         6d      Cluster version is 4.14.0-rc.7

$ oc get csv -n openshift-storage --show-labels
NAME                                      DISPLAY                       VERSION        REPLACES   PHASE       LABELS
mcg-operator.v4.14.0-rhodf                NooBaa Operator               4.14.0-rhodf              Succeeded   operators.coreos.com/mcg-operator.openshift-storage=
ocs-operator.v4.14.0-rhodf                OpenShift Container Storage   4.14.0-rhodf              Succeeded   full_version=4.14.0-157,operatorframework.io/arch.amd64=supported,operatorframework.io/arch.ppc64le=supported,operatorframework.io/arch.s390x=supported,operators.coreos.com/ocs-operator.openshift-storage=
odf-csi-addons-operator.v4.14.0-rhodf     CSI Addons                    4.14.0-rhodf              Succeeded   operators.coreos.com/odf-csi-addons-operator.openshift-storage=
odf-operator.v4.14.0-rhodf                OpenShift Data Foundation     4.14.0-rhodf              Succeeded   full_version=4.14.0-157,operatorframework.io/arch.amd64=supported,operatorframework.io/arch.ppc64le=supported,operatorframework.io/arch.s390x=supported,operators.coreos.com/odf-operator.openshift-storage=
openshift-pipelines-operator-rh.v1.11.0   Red Hat OpenShift Pipelines   1.11.0                    Succeeded   olm.copiedFrom=openshift-operators,operatorframework.io/arch.amd64=supported,operatorframework.io/arch.arm64=supported,operatorframework.io/arch.ppc64le=supported,operatorframework.io/arch.s390x=supported
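The annotation itself can be confirmed directly on a verified cluster (a sketch, not output captured in this report; the expected value "true" follows from the YAML in the earlier comment):

$ oc get sc ocs-storagecluster-ceph-rbd-virtualization \
    -o jsonpath='{.metadata.annotations.storageclass\.kubevirt\.io/is-default-virt-class}'
true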
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:6832