This bug was initially created as a copy of Bug #2073920

I am copying this bug because:

Description of problem (please be detailed as possible and provide log snippets):
We installed the Red Hat ODF operator from the console and tried to create the StorageSystem for IBM FlashSystem with no encryption option. The OSD pods were not created, and the rook-ceph-osd-prepare pods are stuck in CrashLoopBackOff with the following error:

2022-04-06 11:55:37.199155 E | op-osd: failed to provision OSD(s) on PVC ocs-deviceset-ibm-odf-test-0-data-0jmkhn. &{OSDs:[] Status:failed PvcBackedOSD:true Message:failed to set kek as an environment variable: key encryption key is empty}

Version of all relevant components (if applicable):
RH ODF operator: image quay.io/rhceph-dev/ocs-registry:4.10.0-211
OCP version: 4.10.3

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes, we can't install our new IBM ODF operator.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Change the redhat-operators catalog source to use the quay.io/rhceph-dev/ocs-registry:4.10.0-211 image
2. Install the ODF operator for OCP 4.10 from the UI
3. Install the StorageSystem ibm-flashsystem from the UI

Actual results:
The rook-ceph-osd-prepare pods are stuck in CrashLoopBackOff with the KMS error:

2022-04-06 11:55:37.199155 E | op-osd: failed to provision OSD(s) on PVC ocs-deviceset-ibm-odf-test-0-data-0jmkhn. &{OSDs:[] Status:failed PvcBackedOSD:true Message:failed to set kek as an environment variable: key encryption key is empty}

Expected results:
The OSD pods are created successfully with no KMS encryption.

Additional info:
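As a hedged sketch of step 1 of the reproduction, the catalog image can be overridden with a CatalogSource pointing at the internal build. The name/namespace below are the standard OCP defaults and the field values are assumptions, not the exact manifest used by the reporter:

```yaml
# Sketch only: override the operator catalog with the internal ODF build image.
# metadata.name/namespace are the usual OCP defaults; adjust for your cluster.
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operators
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/rhceph-dev/ocs-registry:4.10.0-211
  displayName: ODF internal build
  publisher: rhceph-dev
```

Applying this (e.g. with `oc apply -f catalogsource.yaml`) should make the 4.10.0-211 build visible to OLM so the ODF operator can be installed from the UI as in steps 2-3.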
*** Bug 2074513 has been marked as a duplicate of this bug. ***
"because IBM environment variables were present" Otherwise LGTM, thanks!
Hi,

After installing with the ODF 4.10.1 image, this issue no longer appears. However, the OSD prepare jobs are now stuck in the 'in progress' state and no OSD pods were deployed. As Sébastien requested, I opened another BZ ticket to investigate the new issue: https://bugzilla.redhat.com/show_bug.cgi?id=2081431

Thanks
Moving to VERIFIED based on comment #16
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Data Foundation 4.10.1 Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:2182