Description of problem (please be as detailed as possible and provide log snippets):

The noobaa-db pod does not start in external mode with the latest build 4.13.0-121.stable, failing with the following errors:

Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    8m28s                default-scheduler  Successfully assigned openshift-storage/noobaa-db-pg-0 to worker-1.ocpm4202001.lnxero1.boe
  Warning  FailedMount  6m25s                kubelet            Unable to attach or mount volumes: unmounted volumes=[db], unattached volumes=[db kube-api-access-w6d85 noobaa-postgres-initdb-sh-volume noobaa-postgres-config-volume]: timed out waiting for the condition
  Warning  FailedMount  4m10s                kubelet            Unable to attach or mount volumes: unmounted volumes=[db], unattached volumes=[noobaa-postgres-config-volume db kube-api-access-w6d85 noobaa-postgres-initdb-sh-volume]: timed out waiting for the condition
  Warning  FailedMount  112s                 kubelet            Unable to attach or mount volumes: unmounted volumes=[db], unattached volumes=[kube-api-access-w6d85 noobaa-postgres-initdb-sh-volume noobaa-postgres-config-volume db]: timed out waiting for the condition
  Warning  FailedMount  4s (x12 over 8m19s)  kubelet            MountVolume.MountDevice failed for volume "pvc-21571bbb-d2d8-4b18-be5a-200be8b57847" : rpc error: code = Internal desc = error generating volume 0001-0011-openshift-storage-0000000000000006-63132fe1-ce6f-4518-8af8-f4c4a43024e5: rados: ret=-108, Cannot send after transport endpoint shutdown
  Warning  FailedMount  50s (x2 over 7m40s)  kubelet            Unable to attach or mount volumes: unmounted volumes=[db], unattached volumes=[db kube-api-access-w6d85 noobaa-postgres-initdb-sh-volume noobaa-postgres-config-volume]: timed out waiting for the condition

# oc logs noobaa-db-pg-0 -n openshift-storage
Defaulted container "db" out of: db, init (init), initialize-database (init)
cat: /var/lib/pgsql/data/userdata/PG_VERSION: Input/output error

Version of all relevant components (if applicable):
mcg-operator:         v4.13.0-121.stable
ocs-operator:         v4.13.0-121.stable
odf-operator:         v4.13.0-121.stable
odr-cluster-operator: v4.13.0-121.stable

Does this issue impact your ability to continue to work with the product (please explain in detail what the user impact is)?
Yes

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Deploy ODF in external mode
2.
3.

Actual results:
The noobaa-db pod does not start in external mode with the latest build 4.13.0-121.stable.

Expected results:
The noobaa-db pod should be up and running.

Additional info:
Must-gather logs: https://drive.google.com/file/d/1pvjlW2RveWsHnj5ZnTczuUCgszDnN-n-/view?usp=share_link
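
Additional diagnostics that might help narrow down the "rados: ret=-108" mount failure; note that the PVC name (db-noobaa-db-pg-0) and the csi-rbdplugin label/container names below are assumed ODF defaults and may differ in this cluster:

Check the PVC backing the noobaa-db pod:
# oc describe pvc db-noobaa-db-pg-0 -n openshift-storage

Locate the RBD CSI node plugin pod on the affected node and inspect its logs:
# oc get pods -n openshift-storage -l app=csi-rbdplugin -o wide
# oc logs -n openshift-storage <csi-rbdplugin-pod-on-worker-1> -c csi-rbdplugin

Full event history for the failing pod:
# oc describe pod noobaa-db-pg-0 -n openshift-storage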