Description of problem (please be as detailed as possible and provide log snippets):

Before the ODF 4.12 upgrade, the noobaa-core pod was up and running. After the upgrade, noobaa-core is stuck in a CrashLoopBackOff (CLBO) state because of an undefined symbol, 'md5_ctx_mgr_init_base', which appears to be caused by using the OpenSSL MD5 implementation instead of the one from ISA-L:

$ cat noobaa-core-0-core.log
Version is: 5.12.1-e52b2c3
calling noobaa_init.sh
Running /usr/local/bin/node src/upgrade/upgrade_manager.js --upgrade_scripts_dir /root/node_modules/noobaa-core/src/upgrade/upgrade_scripts
OpenSSL 1.1.1k FIPS 25 Mar 2021
setting up init_rand_seed: starting ...
read_rand_seed: opening /dev/urandom ...
May-26 18:14:58.406 [/20] [LOG] CONSOLE:: load_config_local: LOADED { DEFAULT_ACCOUNT_PREFERENCES: { ui_theme: 'LIGHT' }, REMOTE_NOOAA_NAMESPACE: 'openshift-storage', ALLOW_BUCKET_CREATE_ON_INTERNAL: false }
May-26 18:15:01.469 [/20] [L0] core.rpc.rpc:: RPC register_n2n_proxy
May-26 18:15:01.722 [/20] [LOG] CONSOLE:: loading .env file...
May-26 18:15:01.725 [/20] [LOG] CONSOLE:: detect_fips_mode: found /proc/sys/crypto/fips_enabled with value 1
/usr/local/bin/node: symbol lookup error: /root/node_modules/noobaa-core/build/Release/nb_native.node: undefined symbol: md5_ctx_mgr_init_base
upgrade_manager failed with exit code 127
noobaa_init.sh finished
noobaa_init failed with exit code 127. aborting

Version of all relevant components (if applicable):
ODF 4.12
NooBaa 4.12

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes

Is there any workaround available to the best of your knowledge?
No
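For anyone triaging a similar failure, the unresolved dependency can be inspected directly in the native addon. This is a diagnostic sketch, not output captured from this cluster; it assumes the binutils `nm` and glibc `ldd` tools are available inside the noobaa-core image:

$ oc rsh -n openshift-storage noobaa-core-0
# List the undefined dynamic symbols of the native addon; on an affected build,
# md5_ctx_mgr_init_base is expected to appear with type 'U' (undefined).
$ nm -D /root/node_modules/noobaa-core/build/Release/nb_native.node | grep md5_ctx_mgr
# Show which shared libraries the addon resolves against, e.g. whether an ISA-L
# crypto library is linked in or the MD5 routines are expected from OpenSSL.
$ ldd /root/node_modules/noobaa-core/build/Release/nb_native.node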
Verified in 4.12.4; I don't see the issue anymore.

[root@rdr-sud-odf-b77b-syd05-bastion-0 ~]# oc get csv ocs-operator.v4.12.4-rhodf -n openshift-storage -o yaml | grep full_version
    full_version: 4.12.4-4

[root@rdr-sud-odf-b77b-syd05-bastion-0 ~]# oc get cm cluster-config-v1 -n kube-system -o json | jq -r '.data' | grep -i "fips"
"install-config": "additionalTrustBundlePolicy: Proxyonly\napiVersion: v1\nbaseDomain: redhat.com\ncompute:\n- architecture: ppc64le\n hyperthreading: Enabled\n name: worker\n platform: {}\n replicas: 0\ncontrolPlane:\n architecture: ppc64le\n hyperthreading: Enabled\n name: master\n platform: {}\n replicas: 3\nfips: true\nmetadata:\n creationTimestamp: null\n name: rdr-sud-odf-b77b\nnetworking:\n clusterNetwork:\n - cidr: 10.128.0.0/14\n hostPrefix: 23\n machineNetwork:\n - cidr: 10.0.0.0/16\n networkType: OVNKubernetes\n serviceNetwork:\n - 172.30.0.0/16\nplatform:\n none: {}\npublish: External\npullSecret: \"\"\nsshKey: 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRxWGjBhDVQwPELaTGjn6bu7bq4j5xvviMGNfPwG9lT1Z6VY9yAm+yngWekrbE91S3DiXob2/YxEDauxKUZAqapVGBK5AMYUm+5goJcfAvuZC7FqdeIoDIHdxi9iTwYT7S9FsnI3zS0n8PNuHOBloue7qysVX4mQp8u9Bc+MzKUdEj0jZuHr47hccQABzadnDCt5IPhqdq5AXFtRg4PBfJBmFJFZ+rgtwYHxWb9+/3vLjUbznwORkV+8VQBQpv269g5CcnjNsfAFd8ShWwWtljaFxXiiIEv75UNe+og74Plgz1MjF2aoxeV4dvzxsL272fWZ/l2eqngYQCCByfEI9N\n sudeeshjohn '\n"

[root@rdr-sud-odf-b77b-syd05-bastion-0 ~]# oc get po -n openshift-storage
NAME                                                              READY   STATUS      RESTARTS        AGE
csi-addons-controller-manager-674ccbf7d-4msbw                     2/2     Running     0               12h
csi-cephfsplugin-4hc7w                                            2/2     Running     0               12h
csi-cephfsplugin-provisioner-5c9d4458d9-npvqs                     5/5     Running     0               12h
csi-cephfsplugin-provisioner-5c9d4458d9-z92mh                     5/5     Running     0               12h
csi-cephfsplugin-qrbtl                                            2/2     Running     0               12h
csi-cephfsplugin-rvswk                                            2/2     Running     0               12h
csi-rbdplugin-bznj7                                               3/3     Running     0               12h
csi-rbdplugin-k4z67                                               3/3     Running     0               12h
csi-rbdplugin-provisioner-86d996c57c-2bbrd                        6/6     Running     0               12h
csi-rbdplugin-provisioner-86d996c57c-w6v2l                        6/6     Running     0               12h
csi-rbdplugin-v7tsv                                               3/3     Running     0               12h
noobaa-core-0                                                     1/1     Running     0               12h
noobaa-db-pg-0                                                    1/1     Running     0               12h
noobaa-endpoint-d4c97fbb5-jqbbb                                   1/1     Running     0               12h
noobaa-operator-98555c78f-5rqj9                                   1/1     Running     1 (6h44m ago)   12h
ocs-metrics-exporter-7f485b6644-kg98l                             1/1     Running     0               12h
ocs-operator-6cd8d7784-lkh9h                                      1/1     Running     0               12h
odf-console-5b7fdb77dc-8tlsr                                      1/1     Running     0               12h
odf-operator-controller-manager-6576876446-kr8qk                  2/2     Running     0               12h
rook-ceph-crashcollector-3e3dc22eaa60d2516bd5679905f6e5e8-9b8vf   1/1     Running     0               12h
rook-ceph-crashcollector-59a40355db7f3b9f26027b79332559b4-wqqdm   1/1     Running     0               12h
rook-ceph-crashcollector-8c8e9d9bc70d1ae665d9184f21850ae4-7jc62   1/1     Running     0               12h
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-6655cd54cbghv   2/2     Running     0               12h
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-f4dd5d6djpbx4   2/2     Running     0               12h
rook-ceph-mgr-a-678dc75d68-tr5zs                                  2/2     Running     0               12h
rook-ceph-mon-a-6cf6987878-t7qgc                                  2/2     Running     0               12h
rook-ceph-mon-b-58d798d769-sxfcz                                  2/2     Running     0               12h
rook-ceph-mon-c-586d578fc7-5qn4c                                  2/2     Running     0               12h
rook-ceph-operator-79c74775bd-4mb8t                               1/1     Running     0               12h
rook-ceph-osd-0-6f98d5f98-q98x5                                   2/2     Running     0               12h
rook-ceph-osd-1-5db6ff748b-jqxqc                                  2/2     Running     0               12h
rook-ceph-osd-2-557f6598b8-w9q77                                  2/2     Running     0               12h
rook-ceph-osd-prepare-7131696a70f2d4c43547de084775cd9f-tqllp      0/1     Completed   0               12h
rook-ceph-osd-prepare-85ff59c8b90c16707e9e128ca7feb516-g8zkx      0/1     Completed   0               12h
rook-ceph-osd-prepare-d4726b9ab306fc1ed734368f1f9ac336-vbzmk      0/1     Completed   0               12h
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-85448d6v5slj   2/2     Running     0               12h

[root@rdr-sud-odf-b77b-syd05-bastion-0 ~]# oc get po -n openshift-storage | grep nooba
noobaa-core-0                                                     1/1     Running     0               12h
noobaa-db-pg-0                                                    1/1     Running     0               12h
noobaa-endpoint-d4c97fbb5-jqbbb                                   1/1     Running     0               12h
noobaa-operator-98555c78f-5rqj9                                   1/1     Running     1 (6h44m ago)   12h

[root@rdr-sud-odf-b77b-syd05-bastion-0 ~]# oc version
Client Version: 4.12.0-0.nightly-ppc64le-2023-06-06-010647
Kustomize Version: v4.5.7
Server Version: 4.12.0-0.nightly-ppc64le-2023-06-06-010647
Kubernetes Version: v1.25.10+3fe2906
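As an additional check (a suggested command, not part of the output above), the noobaa-core log can be grepped to confirm that the symbol lookup error from the original report no longer appears on the fixed build:

[root@rdr-sud-odf-b77b-syd05-bastion-0 ~]# oc logs noobaa-core-0 -n openshift-storage | grep -i "symbol lookup error"
(No output is expected on 4.12.4; on the broken build this would have matched the
"/usr/local/bin/node: symbol lookup error: ... undefined symbol: md5_ctx_mgr_init_base" line.)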
Hello Nimrod,

Do we have a root cause for this issue?