Bug 2131330

Summary: Files in the noobaa diagnostics are of zero size (ODF - MCG)
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation Reporter: Ravi K Komanduri <rkomandu>
Component: Multi-Cloud Object Gateway    Assignee: Nimrod Becker <nbecker>
Status: CLOSED NOTABUG QA Contact: Ben Eli <belimele>
Severity: medium Docs Contact:
Priority: unspecified    
Version: 4.11    CC: etamir, nbecker, ocs-bugs, odf-bz-bot
Target Milestone: ---   
Target Release: ---   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2022-10-06 06:04:58 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
noobaa db pod with zero size that got generated via MG (flags: none)

Description Ravi K Komanduri 2022-09-30 17:44:19 UTC
Created attachment 1915328 [details]
noobaa db pod with zero size that got generated via MG

Description of problem (please be as detailed as possible and provide log snippets):

This is a continuation of the previous BZ 2026342, which was closed, but the defect still exists.

When the must-gather (MG) is run, we still see a few files with zero size in the noobaa diagnostics logs. This is specific to the ODF 4.11.1 release deployed with MCG (NooBaa) only.

Listing of the noobaa diagnostics archive; note the zero-size noobaa-db-pg-0-init.log:

-rw-r--r-- root/root       687 2022-09-29 02:13 db-noobaa-db-pg-0-pvc-describe.txt
-rw-r--r-- root/root  22883969 2022-09-29 02:13 noobaa-core-0-core.log
-rw-r--r-- root/root      5869 2022-09-29 02:13 noobaa-core-0-pod-describe.txt
-rw-r--r-- root/root      1472 2022-09-29 02:13 noobaa-db-pg-0-db.log
-rw-r--r-- root/root         0 2022-09-29 02:13 noobaa-db-pg-0-init.log
-rw-r--r-- root/root       217 2022-09-29 02:13 noobaa-db-pg-0-initialize-database.log
-rw-r--r-- root/root      6041 2022-09-29 02:13 noobaa-db-pg-0-pod-describe.txt
-rw-r--r-- root/root   7393749 2022-09-29 02:13 noobaa-default-backing-store-noobaa-pod-b1d19499-noobaa-agent.log
-rw-r--r-- root/root      3563 2022-09-29 02:13 noobaa-default-backing-store-noobaa-pod-b1d19499-pod-describe.txt
-rw-r--r-- root/root       767 2022-09-29 02:13 noobaa-default-backing-store-noobaa-pvc-b1d19499-pvc-describe.txt
-rw-r--r-- root/root    480988 2022-09-29 02:13 noobaa-endpoint-68fdb67d97-dh8tm-endpoint.log
-rw-r--r-- root/root      5752 2022-09-29 02:13 noobaa-endpoint-68fdb67d97-dh8tm-pod-describe.txt
-rw-r--r-- root/root    492700 2022-09-29 02:13 noobaa-endpoint-68fdb67d97-fdf45-endpoint.log
-rw-r--r-- root/root      5757 2022-09-29 02:13 noobaa-endpoint-68fdb67d97-fdf45-pod-describe.txt
-rw-r--r-- root/root    585165 2022-09-29 02:13 noobaa-endpoint-68fdb67d97-qtx9k-endpoint.log
-rw-r--r-- root/root      5752 2022-09-29 02:13 noobaa-endpoint-68fdb67d97-qtx9k-pod-describe.txt
-rw-r--r-- root/root      1096 2022-09-29 02:13 noobaa-endpoint-scc-describe.txt
-rw-r--r-- root/root       353 2022-09-29 02:13 noobaa-operator-6b4466545b-h97kw-noobaa-operator-previous.log
-rw-r--r-- root/root   7415384 2022-09-29 02:13 noobaa-operator-6b4466545b-h97kw-noobaa-operator.log
-rw-r--r-- root/root      6216 2022-09-29 02:13 noobaa-operator-6b4466545b-h97kw-pod-describe.txt
-rw-r--r-- root/root      1085 2022-09-29 02:13 noobaa-scc-describe.txt
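
For reference, a minimal way to spot zero-size entries in such an archive; this is a sketch assuming GNU tar's verbose listing format (size in the third field), not a command taken from the MG tooling itself:

# List the archive and print only entries whose size is 0
tar -tvzf noobaa_diagnostics_*.tar.gz | awk '$3 == 0 {print $NF}'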



Version of all relevant components (if applicable):

oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0    True        False         43d     Cluster version is 4.11.0

oc get csv -n openshift-storage
NAME                                   DISPLAY                       VERSION               REPLACES                          PHASE
mcg-operator.v4.11.1                   NooBaa Operator               4.11.1                mcg-operator.v4.11.0              Succeeded
metallb-operator.4.11.0-202209161807   MetalLB Operator              4.11.0-202209161807                                     Succeeded
ocs-operator.v4.11.1                   OpenShift Container Storage   4.11.1                ocs-operator.v4.11.0              Succeeded
odf-csi-addons-operator.v4.11.1        CSI Addons                    4.11.1                odf-csi-addons-operator.v4.11.0   Succeeded
odf-operator.v4.11.1                   OpenShift Data Foundation     4.11.1                odf-operator.v4.11.0              Succeeded


We don't use rook/Ceph, and there is no rook-mon pod.

oc get configmap -n openshift-storage
NAME                          DATA   AGE
4fd470de.openshift.io         0      46h
ab76f4c9.openshift.io         0      46h
csi-addons-manager-config     1      46h
e8cd140a.openshift.io         0      46h
kube-root-ca.crt              1      46h
noobaa-config                 3      46h
noobaa-operator-lock          0      46h
noobaa-postgres-config        1      46h
noobaa-postgres-initdb-sh     1      46h
odf-operator-manager-config   26     46h
openshift-service-ca.crt      1      46h
rook-ceph-operator-config     4      46h


Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?

The user is impacted: when the MG is requested from a customer, it would contain an empty file, and analysis would be difficult if we need to do any RCA.

Is there any workaround available to the best of your knowledge?
NO

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

1 

Is this issue reproducible?
It should be, for anyone who has deployed ODF (MCG).

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Run the ODF 4.11 must-gather (MG) on an MCG-only deployment (NooBaa only); see the sketch below.
2. Inspect the resulting noobaa_diagnostics_*.tar.gz.
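
A minimal sketch of steps 1-2, assuming a standard ODF must-gather run; the image reference and destination directory below are illustrative assumptions, not values taken from this report:

# Collect the ODF must-gather (image tag is an assumed example for 4.11)
oc adm must-gather --image=registry.redhat.io/odf4/ocs-must-gather-rhel8:v4.11 --dest-dir=odf-mg
# Locate the NooBaa diagnostics archive inside the output and list its contents
find odf-mg -name 'noobaa_diagnostics_*.tar.gz' -exec tar -tvzf {} \;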


Actual results:


Expected results:
The diagnostics archive shouldn't contain files with zero size.

Additional info:

Comment 2 Nimrod Becker 2022-10-02 07:19:03 UTC
I see only noobaa-db-pg-0-init.log with a size of 0.

This is expected when there are no issues with initializing the postgres DB; as long as it is 0, we are happy. If it were not 0, it would mean there is a problem and the DB won't function.

Is that the only file observed as 0? (From the comment it seems so.)
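
As a triage aid, this expectation can be checked mechanically; a sketch only, assuming the extracted archive layout shown in the listing earlier:

# An empty noobaa-db-pg-0-init.log is the healthy case (postgres initialization
# logged nothing); a non-empty file indicates an initialization problem.
if [ -s noobaa-db-pg-0-init.log ]; then
    echo "init log is non-empty: inspect it for DB initialization errors"
else
    echo "init log is empty: expected when the DB initialized cleanly"
fi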

Comment 3 Ravi K Komanduri 2022-10-03 06:52:44 UTC
Nimrod, I am not sure what the "docs needed" here refers to.

Comment 4 Nimrod Becker 2022-10-03 09:40:54 UTC
I think it was due to the comment being private. Can you see it now?

Comment 5 Ravi K Komanduri 2022-10-04 07:54:28 UTC
I can see it now. It makes sense that the init-db file being zero is the exception, since that indicates there are no issues.

I hadn't thought of that. Can I close this then?

Comment 6 Nimrod Becker 2022-10-06 06:04:58 UTC
Of course, I will close it. Thanks for confirming.

Comment 7 Ravi K Komanduri 2022-10-06 06:54:40 UTC
@nbecker
It looks like defect 2131331 also has the same information. I'm not sure how this happened, as only one defect was opened. Can you close that as well?

Comment 8 Ravi K Komanduri 2022-10-06 07:00:20 UTC
*** Bug 2131331 has been marked as a duplicate of this bug. ***

Comment 9 Nimrod Becker 2022-10-06 07:01:13 UTC
I see you closed it. It was probably a problem with BZ opening two issues (see the IDs; they are consecutive).