Description of problem (please be as detailed as possible and provide log snippets):

Upgrade on a FIPS-enabled environment is failing with:

2021-05-05T10:19:20.161388461Z May-5 10:19:20.161 [/13] [L0] core.util.postgres_client:: creating table iostats
2021-05-05T10:19:20.161438248Z May-5 10:19:20.161 [/13] [L0] core.util.postgres_client:: creating table func_stats
2021-05-05T10:19:20.161488534Z May-5 10:19:20.161 [/13] [L0] core.util.postgres_client:: creating table activitylogs
2021-05-05T10:19:20.161538988Z May-5 10:19:20.161 [/13] [L0] core.util.postgres_client:: creating table alertslogs
2021-05-05T10:19:20.161606809Z May-5 10:19:20.161 [/13] [L0] core.util.postgres_client:: creating table system_history
2021-05-05T10:19:20.168804157Z internal/crypto/hash.js:46
2021-05-05T10:19:20.168804157Z   this[kHandle] = new _Hash(algorithm, xofLen);
2021-05-05T10:19:20.168804157Z                   ^
2021-05-05T10:19:20.168804157Z
2021-05-05T10:19:20.168804157Z Error: error:060800C8:digital envelope routines:EVP_DigestInit_ex:disabled for FIPS
2021-05-05T10:19:20.168804157Z     at new Hash (internal/crypto/hash.js:46:19)
2021-05-05T10:19:20.168804157Z     at Object.createHash (crypto.js:115:10)
2021-05-05T10:19:20.168804157Z     at md5 (/root/node_modules/noobaa-core/node_modules/pg/lib/utils.js:168:17)
2021-05-05T10:19:20.168804157Z     at Object.postgresMd5PasswordHash (/root/node_modules/noobaa-core/node_modules/pg/lib/utils.js:173:15)
2021-05-05T10:19:20.168804157Z     at /root/node_modules/noobaa-core/node_modules/pg/lib/client.js:244:36
2021-05-05T10:19:20.168804157Z     at Client._checkPgPass (/root/node_modules/noobaa-core/node_modules/pg/lib/client.js:225:7)
2021-05-05T10:19:20.168804157Z     at Client._handleAuthMD5Password (/root/node_modules/noobaa-core/node_modules/pg/lib/client.js:243:10)
2021-05-05T10:19:20.168804157Z     at Connection.emit (events.js:315:20)
2021-05-05T10:19:20.168804157Z     at Connection.EventEmitter.emit (domain.js:467:12)
2021-05-05T10:19:20.168804157Z     at /root/node_modules/noobaa-core/node_modules/pg/lib/connection.js:115:12 {
2021-05-05T10:19:20.168804157Z   library: 'digital envelope routines',
2021-05-05T10:19:20.168804157Z   function: 'EVP_DigestInit_ex',
2021-05-05T10:19:20.168804157Z   reason: 'disabled for FIPS',
2021-05-05T10:19:20.168804157Z   code: 'ERR_OSSL_EVP_DISABLED_FOR_FIPS'
2021-05-05T10:19:20.168804157Z }

Full log: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j003aife3c333-ua/j003aife3c333-ua_20210505T080105/logs/failed_testcase_ocs_logs_1620205433/test_upgrade_ocs_logs/ocs_must_gather/quay-io-rhceph-dev-ocs-must-gather-sha256-76da8d529f412bb79d33d99fec3d180953c257b904fbbd49f102d5637b17fc04/namespaces/openshift-storage/pods/noobaa-upgrade-job-wdfhq/migrate-job/migrate-job/logs/current.log

Version of all relevant components (if applicable):
4.6.4 upgraded to 4.7.0-377.ci

We have another occurrence of this bug here:
https://ocs4-jenkins-csb-ocsqe.apps.ocp4.prod.psi.redhat.com/job/qe-deploy-ocs-cluster-prod/687/
with build quay.io/rhceph-dev/ocs-registry:4.7.0-377.ci.
This time on this env type: AWS IPI FIPS ENCRYPTION 3AZ RHCOS 3Masters 3Workers 3Infra nodes
The first one was on: AWS IPI 3AZ RHCOS 3Masters 3Workers
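The stack trace shows the node-postgres client computing an MD5 password hash (postgresMd5PasswordHash in pg/lib/utils.js) while the Node.js runtime is linked against a FIPS-enabled OpenSSL, which disallows MD5. A minimal sketch of that failure mode follows; it assumes a FIPS-enabled Node.js runtime and is illustrative only, not code taken from noobaa-core:

// Minimal sketch of the failure seen in the stack trace above.
// Assumes a Node.js runtime linked against a FIPS-enabled OpenSSL, as on this cluster.
'use strict';
const crypto = require('crypto');

// Reports whether OpenSSL FIPS mode is active for this process.
console.log('FIPS mode:', crypto.getFips());

try {
  // This mirrors what pg/lib/utils.js does when the PostgreSQL server
  // requests MD5 password authentication during client.connect().
  const hash = crypto.createHash('md5').update('password' + 'user').digest('hex');
  console.log('md5 digest:', hash);
} catch (err) {
  // With FIPS enabled, MD5 is a disallowed digest, so createHash('md5') throws
  // ERR_OSSL_EVP_DISABLED_FOR_FIPS and the migrate-job pod exits with an error.
  console.error(err.code, err.message);
}

Running a snippet like this on a FIPS-enabled node should reproduce the ERR_OSSL_EVP_DISABLED_FOR_FIPS error without going through the full upgrade.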
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j003aife3c333-ua/j003aife3c333-ua_20210505T080105/logs/failed_testcase_ocs_logs_1620205433/test_upgrade_ocs_logs/ocs_must_gather/quay-io-rhceph-dev-ocs-must-gather-sha256-76da8d529f412bb79d33d99fec3d180953c257b904fbbd49f102d5637b17fc04/namespaces/openshift-storage/oc_output/pods_-owide

Here I see:

noobaa-db-0                       1/1   Running   0   14m   10.130.2.24   ip-10-0-219-56.us-east-2.compute.internal    <none>   <none>
noobaa-db-pg-0                    1/1   Running   0   15m   10.130.2.23   ip-10-0-219-56.us-east-2.compute.internal    <none>   <none>
noobaa-operator-7c64ddbcb-pd7mn   1/1   Running   0   15m   10.129.2.23   ip-10-0-150-221.us-east-2.compute.internal   <none>   <none>
noobaa-upgrade-job-5wjbk          0/1   Error     0   12m   10.129.2.25   ip-10-0-150-221.us-east-2.compute.internal   <none>   <none>
noobaa-upgrade-job-crlrl          0/1   Error     0   10m   10.129.2.29   ip-10-0-150-221.us-east-2.compute.internal   <none>   <none>
noobaa-upgrade-job-rz2cr          0/1   Error     0   11m   10.129.2.27   ip-10-0-150-221.us-east-2.compute.internal   <none>   <none>
noobaa-upgrade-job-s8c6j          0/1   Error     0   12m   10.129.2.26   ip-10-0-150-221.us-east-2.compute.internal   <none>   <none>
noobaa-upgrade-job-wdfhq          0/1   Error     0   11m   10.129.2.28   ip-10-0-150-221.us-east-2.compute.internal   <none>   <none>

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Is there any workaround available to the best of your knowledge?
See the hedged sketch after "Additional info" below.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?
Yes, another occurrence in this run:
https://ocs4-jenkins-csb-ocsqe.apps.ocp4.prod.psi.redhat.com/job/qe-deploy-ocs-cluster-prod/691
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j004aife3c333-ua/j004aife3c333-ua_20210505T160748/logs/failed_testcase_ocs_logs_1620234851/test_upgrade_ocs_logs/

Can this issue be reproduced from the UI?
Haven't tried.

If this is a regression, please provide more details to justify this:
We haven't tested this scenario before, as we extended our upgrade pipeline only recently.

Steps to Reproduce:
1. Install an env on AWS IPI FIPS ENCRYPTION 3AZ RHCOS 3Masters 3Workers 3Infra nodes
2. Run the upgrade

Actual results:
The upgrade fails on the FIPS env.

Expected results:
The MCG upgrade passes.

Additional info:
Full must-gather logs: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j003aife3c333-ua/j003aife3c333-ua_20210505T080105/logs/failed_testcase_ocs_logs_1620205433/test_upgrade_ocs_logs/
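For context only, and not confirmed as the actual fix shipped for this bug: the pg client only takes the failing MD5 code path when the PostgreSQL server requests MD5 password authentication, so one way to avoid it is to have the server use SCRAM-SHA-256, which node-postgres also supports. A hedged sketch, with hypothetical service, user, and database names:

// Hedged workaround sketch, NOT the confirmed fix for this bug.
// Idea: if the noobaa PostgreSQL server authenticates with SCRAM-SHA-256
// instead of MD5, the pg client never calls crypto.createHash('md5').
//
// Server side (illustrative, not taken from the noobaa-db-pg image):
//   postgresql.conf:  password_encryption = 'scram-sha-256'
//   pg_hba.conf:      host all all all scram-sha-256
//   (existing passwords must be re-set so they are stored as SCRAM verifiers)
//
// Client side stays the same -- pg negotiates whatever method the server asks for.
'use strict';
const { Client } = require('pg');

async function main() {
  const client = new Client({
    host: 'noobaa-db-pg',             // hypothetical service name
    user: 'noobaa',                   // hypothetical user
    password: process.env.PGPASSWORD,
    database: 'nbcore',               // hypothetical database name
  });
  await client.connect();             // SASL/SCRAM handshake, which is FIPS-allowed
  const res = await client.query('SELECT 1 AS ok');
  console.log(res.rows);
  await client.end();
}

main().catch(err => console.error(err.code, err.message));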
We ran verification in this job: https://ocs4-jenkins-csb-ocsqe.apps.ocp4.prod.psi.redhat.com/job/qe-deploy-ocs-cluster-prod/694/
with build quay.io/rhceph-dev/ocs-registry:4.7.0-381.ci, and it passed, so I can verify once more on an RC build and then mark the bug as verified.
Verified with the RC 9 build quay.io/rhceph-dev/ocs-registry:4.7.0-383.ci:
https://ocs4-jenkins-csb-ocsqe.apps.ocp4.prod.psi.redhat.com/job/qe-deploy-ocs-cluster-prod/709/consoleFull

Log path: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j006aife3c333-ua/j006aife3c333-ua_20210506T205801

So once the bug is moved to ON_QA, I will move it to VERIFIED. Thanks.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2041