Description of problem (please be as detailed as possible and provide log snippets):

odf-operator.v4.9.0 is in a Failed state.

Version of all relevant components (if applicable):
OpenShift installer: 4.9.0-0.nightly-2021-10-19-124539
ODF version: 4.9.0-193.ci

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Not able to deploy the cluster.

Is there any workaround available to the best of your knowledge?
Not tried.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
1/1

Can this issue be reproduced from the UI?
1/1

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Install ODF using ocs-ci (UI deployment)
2. Verify operator status (see the diagnostic sketch at the end of this report)
3.

Actual results:

NAME                     DISPLAY                       VERSION   REPLACES   PHASE
noobaa-operator.v4.9.0   NooBaa Operator               4.9.0                Succeeded
ocs-operator.v4.9.0      OpenShift Container Storage   4.9.0                Succeeded
odf-operator.v4.9.0      OpenShift Data Foundation     4.9.0                Failed

Expected results:
odf-operator.v4.9.0 should be in the Succeeded state.

Additional info:

> odf-console pod is stuck in ContainerCreating state

NAME                                               READY   STATUS              RESTARTS       AGE   IP            NODE        NOMINATED NODE   READINESS GATES
noobaa-operator-788f8d4c8b-8xmh4                   1/1     Running             0              23m   10.128.4.9    compute-1   <none>           <none>
ocs-metrics-exporter-789dcf45c7-l7rfl              1/1     Running             0              23m   10.131.2.8    compute-5   <none>           <none>
ocs-operator-d585cd56c-vvf8t                       1/1     Running             1 (19m ago)    23m   10.128.2.10   compute-2   <none>           <none>
odf-console-7b8bf97695-vhksv                       0/1     ContainerCreating   0              24m   <none>        compute-2   <none>           <none>
odf-operator-controller-manager-786fb7cd5f-j5d68   1/2     Running             11 (49s ago)   24m   10.130.2.8    compute-3   <none>           <none>
rook-ceph-operator-5b6b59ff97-c9w4k                1/1     Running             0              23m   10.129.2.6    compute-0   <none>           <none>

> odf-console events

Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    24m                  default-scheduler  Successfully assigned openshift-storage/odf-console-7b8bf97695-vhksv to compute-2
  Warning  FailedMount  109s (x10 over 22m)  kubelet            Unable to attach or mount volumes: unmounted volumes=[odf-console-serving-cert], unattached volumes=[odf-console-serving-cert kube-api-access-mfxcr]: timed out waiting for the condition
  Warning  FailedMount  106s (x19 over 24m)  kubelet            MountVolume.SetUp failed for volume "odf-console-serving-cert" : secret "odf-console-serving-cert" not found

> multiple Subscriptions found for the noobaa operator, from the odf-operator-controller-manager log

2021-10-19T21:54:01.484595548Z 2021-10-19T21:54:01.484Z ERROR controllers.Subscription.SetupWithManager failed to create OCS subscriptions, will retry after 5 seconds {"error": "multiple Subscriptions found for package 'noobaa-operator': [noobaa-operator noobaa-operator-stable-4.9-redhat-operators-openshift-marketplace]"}
2021-10-19T21:54:01.484595548Z github.com/go-logr/zapr.(*zapLogger).Error

> subscriptions

NAME                                                                 PACKAGE           SOURCE             CHANNEL
noobaa-operator                                                      noobaa-operator   redhat-operators   stable-4.9
noobaa-operator-stable-4.9-redhat-operators-openshift-marketplace    noobaa-operator   redhat-operators   stable-4.9
ocs-operator                                                         ocs-operator      redhat-operators   stable-4.9
odf-operator                                                         odf-operator      redhat-operators   stable-4.9

> install plan for noobaa-operator-stable-4.9-redhat-operators-openshift-marketplace:
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-094vuf1cs36-t4a/j-094vuf1cs36-t4a_20211019T205525/logs/failed_testcase_ocs_logs_1634677472/test_deployment_ocs_logs/ocs_must_gather/quay-io-rhceph-dev-ocs-must-gather-sha256-0619998acac82e7a758421be7fe47a985142f0cf9f2400e89b7f5782a5eab00c/namespaces/openshift-storage/operators.coreos.com/installplans/install-v6xl4.yaml

Job: https://ocs4-jenkins-csb-ocsqe.apps.ocp4.prod.psi.redhat.com/job/qe-deploy-ocs-cluster-prod/2103//consoleFull

Must gather: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-094vuf1cs36-t4a/j-094vuf1cs36-t4a_20211019T205525/logs/failed_testcase_ocs_logs_1634677472/test_deployment_ocs_logs/
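> diagnostic sketch

For reference, a minimal sketch of the oc commands that would collect the output shown above. These commands are not taken from the job logs; the namespace (openshift-storage), pod/Service names, and the "manager" container name are either copied from this run or assumed.

# CSV phases (Actual results section)
oc get csv -n openshift-storage

# Pod status, including the odf-console pod stuck in ContainerCreating
oc get pods -n openshift-storage -o wide

# Events for the stuck console pod (pod name taken from this run)
oc describe pod odf-console-7b8bf97695-vhksv -n openshift-storage

# Subscriptions, showing the duplicate noobaa-operator entries
oc get subscriptions -n openshift-storage

# odf-operator controller logs with the "multiple Subscriptions found" error
# (assumes the operator container is named "manager")
oc logs deployment/odf-operator-controller-manager -c manager -n openshift-storage

# The missing serving-cert secret; it is normally generated by the OpenShift
# service CA operator from the Service's
# service.beta.openshift.io/serving-cert-secret-name annotation
# (assumes the console Service is named odf-console)
oc get secret odf-console-serving-cert -n openshift-storage
oc get service odf-console -n openshift-storage -o yaml | grep serving-cert-secret-name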
*** This bug has been marked as a duplicate of bug 2014034 ***