Bug 2015408

Summary: Must-gather: Skip collection of noobaa related resources if Storagecluster is not yet created (continued from Bug 1934625)
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Neha Berry <nberry>
Component: must-gather
Assignee: Rewant <resoni>
Status: CLOSED WONTFIX
QA Contact: Elad <ebenahar>
Severity: low
Docs Contact:
Priority: unspecified
Version: 4.9
CC: muagarwa, ocs-bugs, odf-bz-bot, sabose
Target Milestone: ---
Flags: resoni: needinfo? (nberry)
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-02-08 12:48:15 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
Attachments:
terminal log (flags: none)

Description Neha Berry 2021-10-19 06:42:24 UTC
Created attachment 1834512 [details]
terminal log

Description of problem (please be as detailed as possible and provide log
snippets):
============================================================================
This bug continues Bug 1934625 for the following issue, which was not fixed in 4.9. After talking to dev, we proposed raising a bug and then having an open discussion on it with the other stakeholders.

Initially it was thought that, from ODF 4.9, NooBaa would be independent of the storagecluster resource, but that change is not yet part of 4.9 (not sure if it is planned for 4.10).

Based on that, until a storagecluster is created, do we really need to attempt collecting the following NooBaa-related resources? (If yes, ignore this comment.) A sketch of the kind of guard in question follows the log below.

[must-gather-fghjn] POD 2021-10-18T18:21:55.526374310Z collecting dump of status
[must-gather-fghjn] POD 2021-10-18T18:24:55.539019154Z collecting dump of obc list
[must-gather-fghjn] POD 2021-10-18T18:24:57.008489519Z collecting dump of noobaa
[must-gather-fghjn] POD 2021-10-18T18:24:57.202546579Z collecting dump of backingstore
[must-gather-fghjn] POD 2021-10-18T18:24:57.403956103Z collecting dump of bucketclass
[must-gather-fghjn] POD 2021-10-18T18:24:57.959602253Z collecting dump of noobaa-operator-74f44455d5-wq9vn pod from openshift-storage
[must-gather-fghjn] POD 2021-10-18T18:24:58.319199459Z No resources found in openshift-storage namespace.
[must-gather-fghjn] POD 2021-10-18T18:24:58.506598318Z collecting oc describe command pod noobaa-db-pg-0
[must-gather-fghjn] POD 2021-10-18T18:24:58.729255173Z Error from server (NotFound): pods "noobaa-db-pg-0" not found
[must-gather-fghjn] POD 2021-10-18T18:24:58.735024796Z collecting oc describe command statefulset.apps noobaa-db-pg
[must-gather-fghjn] POD 2021-10-18T18:24:58.935516556Z Error from server (NotFound): statefulsets.apps "noobaa-db-pg" not found
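
For illustration, such a guard in the gather script might look like the following. This is a minimal sketch only, assuming a bash-based gather script; the resource list, output paths, and messages are assumptions for illustration, not the actual ocs-must-gather code:

NS=openshift-storage
# Only dump NooBaa/MCG resources when a StorageCluster exists in the namespace.
if oc get storagecluster -n "${NS}" --no-headers 2>/dev/null | grep -q .; then
  mkdir -p must-gather
  for res in noobaa backingstore bucketclass obc; do
    # Dump each NooBaa-related resource type to its own YAML file.
    oc get "${res}" -n "${NS}" -o yaml > "must-gather/${res}.yaml" 2>&1
  done
else
  # No StorageCluster: skip the dumps and describe calls that currently
  # produce the NotFound errors shown above.
  echo "No StorageCluster in ${NS}; skipping NooBaa resource collection"
fi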


Version of all relevant components (if applicable):
======================================================
oc get csv
NAME                     DISPLAY                       VERSION   REPLACES   PHASE
noobaa-operator.v4.9.0   NooBaa Operator               4.9.0                Succeeded
ocs-operator.v4.9.0      OpenShift Container Storage   4.9.0                Succeeded
odf-operator.v4.9.0      OpenShift Data Foundation     4.9.0                Succeeded
oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2021-10-12-084355   True        False         5d21h   Cluster version is 4.9.0-0.nightly-2021-10-12-084355

Platform: VMware IPI


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
=====================================================================
No

Is there any workaround available to the best of your knowledge?
==============================================================
NA

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
==================================================================================
2

Is this issue reproducible?
===============================
Always in all versions

Can this issue be reproduced from the UI?
==========================================
NA

If this is a regression, please provide more details to justify this:
========================================================================
No, present since the start

Steps to Reproduce:
=====================
1. Install the OCS operator. Do not create a Storagecluster
2. Initiate a must-gather collection
oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.8 |tee terminal-must-gather

oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.9 |tee terminal-must-gather-49

Similar observation seen in another reproducer:

1. Install OCS operator and create storagecluster
2. Delete the storagecluster and follow the uninstall steps (remove OCS completely, along with the OCS node label)



Actual results:
====================
Collection of NooBaa resources is attempted, as shown in the terminal logs

Expected results:
=====================
If the storagecluster is not present, and NooBaa depends on the storagecluster, then we should skip collecting the MCG details.

But we also need to discuss the trade-off between fixing this in the current release versus leaving it as-is so as not to affect the existing working code.

Additional info:
======================
More details: https://bugzilla.redhat.com/show_bug.cgi?id=1934625#c25

Comment 2 Rewant 2021-11-10 04:51:27 UTC
Jose suggested that we should collect logs for NooBaa resources even if a StorageCluster is not installed; we still support non-UI use cases that deploy standalone NooBaa without a StorageCluster. The whole "dependency" on a StorageCluster is purely a UI thing.
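
If the collection noise were still to be reduced in a later release, this reasoning implies any guard would have to key on the NooBaa CR itself rather than the StorageCluster, so that standalone NooBaa deployments are still gathered. A hypothetical bash sketch of that alternative (illustrative names and paths, not the shipped script):

# Gate on the NooBaa CR, not the StorageCluster, so standalone NooBaa
# deployments (which have no StorageCluster) are still collected.
if oc get noobaa -n openshift-storage --no-headers 2>/dev/null | grep -q .; then
  mkdir -p must-gather
  oc get noobaa,backingstore,bucketclass -n openshift-storage -o yaml \
    > must-gather/noobaa-resources.yaml 2>&1
else
  echo "No NooBaa CR found; skipping MCG collection"
fi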

Comment 3 Mudit Agarwal 2022-02-08 12:48:15 UTC
Closing it as WONTFIX based on https://bugzilla.redhat.com/show_bug.cgi?id=2015408#c2