Created attachment 1960169 [details]
must gather logs

Description of problem (please be detailed as possible and provide log snippets):

Version of all relevant components (if applicable): 4.13

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
I can continue to work without any issue.

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible? Yes

Can this issue be reproduced from the UI? Yes

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Install the ODF operator.
2. Configure the Kubernetes auth method as described in the documentation: https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.11/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/deploy-using-local-storage-devices-bm#enabling-cluster-wide-encryprtion-with-the-kubernetes-authentication-using-kms_local-bare-metal
3. Create a storage system.
4. Select "Enable data encryption for block and file".
5. Select "StorageClass encryption" (refer to the attached screenshot).
6. Click Next and complete the storage system creation.

Actual results:
The StorageCluster does not move out of the 'Progressing' phase.

Expected results:
The StorageCluster should reach the 'Ready' state.

Additional info:
The storage cluster was created with StorageClass encryption enabled, and the 'ocs-storagecluster-ceph-rbd-encrypted' storage class was created. However, the StorageCluster remains in the 'Progressing' state even though all pods are up and running. All functionality works without any issue.
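As a quick sanity check that the encrypted storage class from step 5 was actually wired up for KMS encryption, its parameters can be inspected. The sketch below is a minimal, hypothetical check: the parameter names `encrypted` and `encryptionKMSID` are the ceph-csi RBD conventions and the sample dict (including the KMS ID value) is illustrative, not taken from this cluster; verify against `oc get storageclass ocs-storagecluster-ceph-rbd-encrypted -o json`.

```python
# Hypothetical check that a StorageClass requests KMS-backed encryption.
# Parameter names follow ceph-csi RBD conventions (an assumption here);
# confirm against the real object:
#   oc get storageclass ocs-storagecluster-ceph-rbd-encrypted -o json

def is_kms_encrypted(storageclass: dict) -> bool:
    """True if the class sets encrypted=true and references a KMS config."""
    params = storageclass.get("parameters", {})
    return params.get("encrypted") == "true" and bool(params.get("encryptionKMSID"))

# Illustrative sample object (values are placeholders, not cluster output).
SC_ENCRYPTED = {
    "metadata": {"name": "ocs-storagecluster-ceph-rbd-encrypted"},
    "parameters": {"encrypted": "true", "encryptionKMSID": "example-kms-id"},
}
SC_PLAIN = {
    "metadata": {"name": "ocs-storagecluster-ceph-rbd"},
    "parameters": {},
}

if __name__ == "__main__":
    print(is_kms_encrypted(SC_ENCRYPTED))  # True
    print(is_kms_encrypted(SC_PLAIN))      # False
```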
StorageCluster details
==============================================
❯ oc get storagecluster -n openshift-storage
NAME                 AGE   PHASE         EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   23m   Progressing              2023-04-26T16:36:26Z   4.13.0

StorageClass output
===============================================
❯ oc get storageclass
NAME                                    PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2-csi                                 ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h54m
gp3-csi (default)                       ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   3h54m
ocs-storagecluster-ceph-rbd             openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   19m
ocs-storagecluster-ceph-rbd-encrypted   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              false                  19m
ocs-storagecluster-cephfs               openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   19m
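To see which status condition is keeping the StorageCluster out of 'Ready', the conditions from `oc get storagecluster ocs-storagecluster -n openshift-storage -o json` can be filtered. The sketch below is a minimal example under assumptions: the condition types (`Available`, `Progressing`, `Degraded`) follow common Kubernetes conventions, and the embedded sample JSON (including its message text) is illustrative rather than output from this cluster.

```python
import json

# Illustrative sample of a StorageCluster status (not real cluster output).
SAMPLE = """
{
  "status": {
    "phase": "Progressing",
    "conditions": [
      {"type": "Available",   "status": "True",  "message": ""},
      {"type": "Progressing", "status": "True",  "message": "reconcile in progress (hypothetical message)"},
      {"type": "Degraded",    "status": "False", "message": ""}
    ]
  }
}
"""

def blocking_conditions(storagecluster: dict) -> list[str]:
    """Return human-readable reasons the cluster is not 'Ready'.

    Assumes Kubernetes-style conditions where a True 'Progressing'
    or 'Degraded' condition indicates the cluster is not Ready.
    """
    reasons = []
    for cond in storagecluster.get("status", {}).get("conditions", []):
        if cond["type"] in ("Progressing", "Degraded") and cond["status"] == "True":
            reasons.append(f"{cond['type']}: {cond.get('message', '')}")
    return reasons

if __name__ == "__main__":
    sc = json.loads(SAMPLE)
    print(sc["status"]["phase"])  # Progressing
    for reason in blocking_conditions(sc):
        print(reason)
```

In practice the same filtering can be done directly with `oc get storagecluster -o jsonpath=...` or `jq`; the Python form is used here only so the logic is explicit.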
PR up for review: https://github.com/red-hat-storage/ocs-operator/pull/2040
This BZ will move to MODIFIED once the PR is merged into 4.13. We need acks for 4.13 on this BZ as well.
@ebenahar, can you please provide the QA_ACK+ flag?
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:3742