This bug was initially created as a copy of Bug #2073920

I am copying this bug because:

Description of problem (please be detailed as possible and provide log snippets):
We installed the Red Hat ODF operator from the console and tried to create the StorageSystem for IBM FlashSystem with the no-encryption option. The OSD pods were not created, and the rook-ceph-osd-prepare pods are stuck in CrashLoopBackOff with the following error:

2022-04-06 11:55:37.199155 E | op-osd: failed to provision OSD(s) on PVC ocs-deviceset-ibm-odf-test-0-data-0jmkhn. &{OSDs:[] Status:failed PvcBackedOSD:true Message:failed to set kek as an environment variable: key encryption key is empty}

Version of all relevant components (if applicable):
RH ODF operator: image quay.io/rhceph-dev/ocs-registry:4.10.0-211
OCP version: 4.10.3

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes, we can't install our new IBM ODF operator.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Change the redhat-operator catalog source to use the quay.io/rhceph-dev/ocs-registry:4.10.0-211 image
2. Install the ODF operator for OCP 4.10 from the UI
3. Install the ibm-flashsystem StorageSystem from the UI

Actual results:
The rook-ceph-osd-prepare pods are stuck in CrashLoopBackOff with the KMS error:

2022-04-06 11:55:37.199155 E | op-osd: failed to provision OSD(s) on PVC ocs-deviceset-ibm-odf-test-0-data-0jmkhn. &{OSDs:[] Status:failed PvcBackedOSD:true Message:failed to set kek as an environment variable: key encryption key is empty}

Expected results:
The OSD pods are created successfully, with no KMS encryption.

Additional info:
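For illustration only: the error suggests the OSD prepare job requires a key encryption key (KEK) even though encryption was not requested. The sketch below shows, under assumed names (osdSpec, kekEnv are hypothetical, not Rook's real API), the kind of guard that would make an unencrypted device set skip the KEK lookup instead of failing:

```go
package main

import (
	"errors"
	"fmt"
)

// osdSpec is a hypothetical stand-in for the per-device-set settings
// the prepare job sees; it is not Rook's actual type.
type osdSpec struct {
	Encrypted bool
	KEK       string
}

// kekEnv returns the environment variable value for the KEK.
// When encryption is disabled it returns nothing rather than an error,
// which is the behavior the reporter expects for the no-encryption case.
func kekEnv(spec osdSpec) (string, error) {
	if !spec.Encrypted {
		// No encryption requested: no KEK is needed, so skip it.
		return "", nil
	}
	if spec.KEK == "" {
		// Encryption requested but no key available: this is a real error.
		return "", errors.New("key encryption key is empty")
	}
	return spec.KEK, nil
}

func main() {
	// The reporter's scenario: encryption disabled, no KEK set.
	env, err := kekEnv(osdSpec{Encrypted: false})
	fmt.Println(env == "", err == nil) // → true true
}
```

If the prepare job instead checks the KEK unconditionally, every unencrypted PVC-backed OSD would fail with exactly the message quoted above.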
Sunil, is this just a dup of https://bugzilla.redhat.com/show_bug.cgi?id=2074558?

*** This bug has been marked as a duplicate of bug 2074558 ***