Bug 2175612
Summary: | noobaa-core-0 crashing and storagecluster not reaching the Ready state during ODF deployment with FIPS enabled on a 4.13 cluster | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | narayanspg <ngowda>
Component: | Multi-Cloud Object Gateway | Assignee: | Liran Mauda <lmauda>
Status: | CLOSED ERRATA | QA Contact: | Sagi Hirshfeld <shirshfe>
Severity: | medium | Docs Contact: |
Priority: | unspecified | |
Version: | 4.13 | CC: | branto, dahorak, ebenahar, kelwhite, kimberlysnider16, kramdoss, lmauda, muagarwa, nbecker, ngowda, ocs-bugs, odf-bz-bot, pbalogh, sasundar, tdesala
Target Milestone: | --- | Keywords: | Regression, TestBlocker
Target Release: | ODF 4.13.0 | |
Hardware: | ppc64le | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | Cause: the isa-l.gyp sources were not handled correctly across the different supported architectures. Fix: fetch and handle the isa-l.gyp sources as expected on all architectures. (See the sketch after this table.) | |
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2023-06-21 15:24:25 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
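
The Doc Text above refers to a per-architecture build issue in the noobaa-core isa-l.gyp handling; the cluster in this report is ppc64le with FIPS enabled. A minimal sketch, using standard `oc` commands, of how the architecture and FIPS state that trigger the problem can be confirmed on a cluster (`<image-ref>` and `<node-name>` are placeholders, not values from this report):

```
# Node architectures in the cluster (Hardware above: ppc64le)
oc get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture

# Image reference of the running noobaa-core pod, then the OS/Arch it reports
oc get pod noobaa-core-0 -n openshift-storage -o jsonpath='{.spec.containers[0].image}{"\n"}'
oc image info <image-ref>

# FIPS mode on a node (1 means enabled)
oc debug node/<node-name> -- chroot /host cat /proc/sys/crypto/fips_enabled
```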
Description
narayanspg
2023-03-06 06:32:59 UTC
must-gather logs are available at https://drive.google.com/file/d/12KJMx_Q9loBQif6dxAh7gtd6i-RYkr_F/view?usp=share_link

Hi, I have been asked for needinfo but am not able to see it, as the comments are private. Please let me know the required info. Thanks, Narayan

Updated in DM.

Hi Liran, tested on ODF builds 4.13.0-90 and 4.13.0-92. Both build environments show the same issue (noobaa-core-0 crashing and the storagecluster not reaching the Ready state). I have retained the cluster with build 4.13.0-92 if you would like to access it, and I also have must-gather logs for build 4.13.0-90.

Hi Liran, tested on build 4.13.0-89 and still see noobaa-core crashing, due to which the storagecluster is not reaching the Ready state. Shared the cluster details over IM if you would like to verify.

```
[root@fips3-cicd-odf-f92c-sao01-bastion-0 ~]# oc describe csv odf-operator.v4.13.0 -n openshift-storage | grep full
Labels:          full_version=4.13.0-89
      f:full_version:
[root@fips3-cicd-odf-f92c-sao01-bastion-0 ~]# oc get pods
NAME                                                              READY   STATUS                 RESTARTS        AGE
csi-addons-controller-manager-78f5bcc9fb-lw7jj                    2/2     Running                0               43m
csi-cephfsplugin-6kqpk                                            2/2     Running                0               25m
csi-cephfsplugin-hq84z                                            2/2     Running                0               25m
csi-cephfsplugin-jrd4r                                            2/2     Running                0               25m
csi-cephfsplugin-provisioner-68b9b6dd87-99cwv                     5/5     Running                0               25m
csi-cephfsplugin-provisioner-68b9b6dd87-hd44m                     5/5     Running                0               25m
csi-rbdplugin-hcph7                                               3/3     Running                0               25m
csi-rbdplugin-provisioner-5d56c8bc84-6xwl4                        6/6     Running                0               25m
csi-rbdplugin-provisioner-5d56c8bc84-8cslz                        6/6     Running                0               25m
csi-rbdplugin-vwm8n                                               3/3     Running                0               25m
csi-rbdplugin-zcsw9                                               3/3     Running                0               25m
noobaa-core-0                                                     0/1     CrashLoopBackOff       8 (4m49s ago)   22m
noobaa-db-pg-0                                                    1/1     Running                0               22m
noobaa-operator-645fdb94b8-gvs9n                                  1/1     Running                0               44m
ocs-metrics-exporter-57895c9ccf-trbbh                             1/1     Running                0               43m
ocs-operator-585f588f7b-zhr7t                                     1/1     Running                0               43m
odf-console-6d98c4c849-z889p                                      1/1     Running                0               44m
odf-operator-controller-manager-75b9bb8675-mhjl5                  2/2     Running                0               44m
rook-ceph-crashcollector-79d743de54eb92f2153e337679efd005-d8m4p   1/1     Running                0               22m
rook-ceph-crashcollector-89626277bd0a03dff0a95de9f03e3802-zcw4f   1/1     Running                0               23m
rook-ceph-crashcollector-e89289c97a67939b890cfbc685c93798-qjckh   1/1     Running                0               23m
rook-ceph-exporter-79d743de54eb92f2153e337679efd005-55f76bq7rzf   0/1     CreateContainerError   0               23m
rook-ceph-exporter-79d743de54eb92f2153e337679efd005-5c994cq89dd   0/1     CreateContainerError   0               22m
rook-ceph-exporter-89626277bd0a03dff0a95de9f03e3802-7499bb8t69j   0/1     CreateContainerError   0               23m
rook-ceph-exporter-89626277bd0a03dff0a95de9f03e3802-8568f9l5txk   0/1     CreateContainerError   0               23m
rook-ceph-exporter-e89289c97a67939b890cfbc685c93798-6ffb64qr72t   0/1     CreateContainerError   0               23m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-687695d4m9m8q   2/2     Running                0               23m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-5b9fbf69v8ccq   2/2     Running                0               22m
rook-ceph-mgr-a-5d445988bb-jnd4j                                  2/2     Running                0               23m
rook-ceph-mon-a-cd5f74545-dcx6s                                   2/2     Running                0               24m
rook-ceph-mon-b-54b46b7c68-wsm6t                                  2/2     Running                0               24m
rook-ceph-mon-c-97cfcf46b-6tcg4                                   2/2     Running                0               24m
rook-ceph-operator-5679dd6894-5ff5k                               1/1     Running                0               25m
rook-ceph-osd-0-86c679fb8c-grmgz                                  2/2     Running                0               23m
rook-ceph-osd-1-6996bf55c6-5pnh8                                  2/2     Running                0               23m
rook-ceph-osd-2-7d8656c8f7-pbd7j                                  2/2     Running                0               23m
rook-ceph-osd-prepare-3f5c0325f7bf6544a089bebc082f7032-wjrhq      0/1     Completed              0               23m
rook-ceph-osd-prepare-769ce41a3c76ea9bd790b0c372e65411-8thr8      0/1     Completed              0               23m
rook-ceph-osd-prepare-8ea6c20b536c1991820a71bbd9f05119-x6627      0/1     Completed              0               23m
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-64556bdjw24t   2/2     Running                0               22m
```
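
For anyone triaging a cluster in this state, a minimal sketch of the commands that capture the noobaa-core crash details (assuming the default openshift-storage namespace used above):

```
# Logs from the current and the previously crashed noobaa-core container
oc logs noobaa-core-0 -n openshift-storage
oc logs noobaa-core-0 -n openshift-storage --previous

# Restart count, last container state, and events for the crashing pod
oc describe pod noobaa-core-0 -n openshift-storage

# Recent namespace events, newest last
oc get events -n openshift-storage --sort-by=.lastTimestamp
```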
must-gather logs here: https://drive.google.com/file/d/1ruqsl17SjjfNMUMKTIscHNDDH3_qe921/view?usp=share_link

Hi Liran, let me know in which build the fix will be available. Thanks.

Tried on the latest build with FIPS enabled. The storagecluster is not reaching the Ready state even though all pods are up and running, with the reason "Waiting on Nooba instance to finish initialization". You can access the cluster with the details below:

```
web_console_url = "https://console-openshift-console.apps.fips5-cicd-odf-e564.redhat.com"
kubeadm/opyKm-ysFdo-uFjDA-A3eaM
etc_hosts_entries = <<EOT
169.57.180.66 api.fips5-cicd-odf-e564.redhat.com console-openshift-console.apps.fips5-cicd-odf-e564.redhat.com integrated-oauth-server-openshift-authentication.apps.fips5-cicd-odf-e564.redhat.com oauth-openshift.apps.fips5-cicd-odf-e564.redhat.com prometheus-k8s-openshift-monitoring.apps.fips5-cicd-odf-e564.redhat.com grafana-openshift-monitoring.apps.fips5-cicd-odf-e564.redhat.com example.apps.fips5-cicd-odf-e564.redhat.com
EOT
```

```
[root@fips5-cicd-odf-e564-sao01-bastion-0 ~]# oc describe csv odf-operator.v4.13.0 -n openshift-storage | grep full
Labels:          full_version=4.13.0-130
      f:full_version:
[root@fips5-cicd-odf-e564-sao01-bastion-0 ~]# oc logs noobaa-core-0 | grep fips
Apr-10 10:07:56.195 [/20] [LOG] CONSOLE:: detect_fips_mode: found /proc/sys/crypto/fips_enabled with value 1
detect_fips_mode: found /proc/sys/crypto/fips_enabled with value 1
detect_fips_mode: found /proc/sys/crypto/fips_enabled with value 1
detect_fips_mode: found /proc/sys/crypto/fips_enabled with value 1
[root@fips5-cicd-odf-e564-sao01-bastion-0 ~]# oc get storagecluster
NAME                 AGE    PHASE         EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   179m   Progressing              2023-04-10T10:03:25Z   4.13.0
[root@fips5-cicd-odf-e564-sao01-bastion-0 ~]# oc describe storagecluster ocs-storagecluster | grep Waiting
      Message:                Waiting on Nooba instance to finish initialization
```

The above cluster will be deleted EOD today. Let us know if it is still being used.

Moving the bug to Assigned based on the above comment.

Hi Liran, as discussed you can close this one, as the noobaa pods are no longer crashing and the original error is not seen now. Created new BZ 2187602. Thank you.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742