Bug 1924634

Summary: MG terminal logs show `pods "compute-x-debug" not found` even though pods are in Running state
Product: [Red Hat Storage] Red Hat OpenShift Container Storage
Reporter: Neha Berry <nberry>
Component: must-gather
Assignee: Pulkit Kundra <pkundra>
Status: CLOSED ERRATA
QA Contact: Tiffany Nguyen <tunguyen>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.7
CC: ebenahar, muagarwa, ocs-bugs, sabose, tunguyen
Target Milestone: ---
Keywords: AutomationBackLog, Regression
Target Release: OCS 4.7.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: 4.7.0-721.ci
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-05-19 09:18:58 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
  terminal output (flags: none)
  console output (flags: none)

Description Neha Berry 2021-02-03 11:01:57 UTC
Created attachment 1754668 [details]
terminal output

Description of problem (please be as detailed as possible and provide log
snippets):
=====================================================================
Observed in all recent OCS 4.7.0 builds: on initiating must-gather, the terminal log keeps repeating the following messages for 300 seconds, even though the pods in question are actually in the Running state (see the sketch after the pod listing below).

[must-gather-vzz6q] POD Error from server (NotFound): pods "compute-0-debug" not found
[must-gather-vzz6q] POD Error from server (NotFound): pods "compute-1-debug" not found
[must-gather-vzz6q] POD Error from server (NotFound): pods "compute-2-debug" not found
[must-gather-vzz6q] POD waiting for helper pod and debug pod for 300 seconds


NAME                                                                  READY   STATUS      RESTARTS   AGE
pod/compute-0-debug                                                   1/1     Running     0          3m32s
pod/compute-1-debug                                                   1/1     Running     0          3m32s
pod/compute-2-debug                                                   1/1     Running     0          3m32s
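
The combination above, a status poll reporting NotFound while `oc get pods` shows the same debug pods Running, is the pattern a polling loop produces when it looks the pods up in the wrong namespace. The following is a minimal, hypothetical sketch of such a loop; it is NOT the actual ocs-must-gather gather script, and the missing namespace flag is an illustrative assumption:

#!/bin/bash
# Hypothetical illustration only; NOT the actual ocs-must-gather gather script.
# If the poll omits the namespace the debug pods were created in, each lookup
# targets a different namespace and fails with NotFound for the full 300 seconds.
for elapsed in $(seq 0 5 300); do
    echo "waiting for helper pod and debug pod for ${elapsed} seconds"
    ready=true
    for node in compute-0 compute-1 compute-2; do
        # A missing "-n <debug namespace>" here would produce the observed
        # 'Error from server (NotFound): pods "compute-X-debug" not found' lines.
        phase=$(oc get pod "${node}-debug" -o jsonpath='{.status.phase}') || ready=false
        [ "${phase}" = "Running" ] || ready=false
    done
    "${ready}" && break
    sleep 5
done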


Version of all relevant components (if applicable):
=====================================================
OCS = 4.7.0-249.ci (also observed on 4.7.0-241.ci)

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
===========================================================================
Not sure, as the must-gather collection eventually succeeds, but it is unclear whether any logs fail to be collected because of this issue.

Is there any workaround available to the best of your knowledge?
====================================================================
None known.

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
==========================================================================
3

Is this issue reproducible?
===============================
Yes, on both LSO and dynamically provisioned clusters.


Can this issue be reproduced from the UI?
===========================================
NA

If this is a regression, please provide more details to justify this:
=======================================================================
Probably yes.

Steps to Reproduce:
=====================
1. Install OCS 4.7.0-249.ci
2. Run ocs-must-gather:

$ oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.7 |tee terminal-must-gather2

3. Check for the above messages in the terminal output or in the collected file (see the snippet below).
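
As a quick way to perform step 3, the repeated messages can be counted in the captured terminal file; the filename below is the one used in step 2:

# Count how many NotFound lines were logged and confirm the 300-second wait message.
grep -c 'Error from server (NotFound): pods "compute-.*-debug" not found' terminal-must-gather2
grep 'waiting for helper pod and debug pod for 300 seconds' terminal-must-gather2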


Actual results:
===================
The above messages keep appearing for up to 300 seconds, even though the pods are already in the Running state.


Expected results:
======================
The messages should stop once the pods reach the Running state.
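
One way to express the expected behaviour, purely as an illustrative sketch and not the shipped fix, is a readiness wait that returns as soon as the debug pods report Ready in the namespace they actually run in (DEBUG_NS is a placeholder):

# Illustrative sketch only, not the actual fix: block until the debug pods are
# Ready (at most 300s) instead of repeatedly reporting NotFound after they run.
DEBUG_NS=${DEBUG_NS:-default}   # placeholder; set to the namespace the debug pods run in
oc wait --for=condition=Ready \
    pod/compute-0-debug pod/compute-1-debug pod/compute-2-debug \
    -n "${DEBUG_NS}" --timeout=300s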


Additional info:
====================
$ oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.7 |tee terminal-must-gather2
[must-gather      ] OUT Using must-gather plug-in image: quay.io/rhceph-dev/ocs-must-gather:latest-4.7
[must-gather      ] OUT namespace/openshift-must-gather-7fh8m created
[must-gather      ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-tvqtc created
[must-gather      ] OUT pod for plug-in image quay.io/rhceph-dev/ocs-must-gather:latest-4.7 created
[must-gather-vzz6q] POD pod/must-gather-vzz6q-helper created
[must-gather-vzz6q] POD debugging node compute-0 
[must-gather-vzz6q] POD debugging node compute-1 
[must-gather-vzz6q] POD debugging node compute-2 
[must-gather-vzz6q] POD Starting pod/compute-1-debug ...
[must-gather-vzz6q] POD To use host binaries, run `chroot /host`
[must-gather-vzz6q] POD Starting pod/compute-2-debug ...
[must-gather-vzz6q] POD To use host binaries, run `chroot /host`
[must-gather-vzz6q] POD pod/must-gather-vzz6q-helper labeled
[must-gather-vzz6q] POD Starting pod/compute-0-debug ...
[must-gather-vzz6q] POD To use host binaries, run `chroot /host`
[must-gather-vzz6q] POD waiting for the compute-0-debug pod to be in ready state
[must-gather-vzz6q] POD waiting for helper pod and debug pod for 0 seconds
[must-gather-vzz6q] POD Error from server (NotFound): pods "compute-0-debug" not found
[must-gather-vzz6q] POD Error from server (NotFound): pods "compute-1-debug" not found
[must-gather-vzz6q] POD Error from server (NotFound): pods "compute-2-debug" not found

...


[must-gather-vzz6q] POD Error from server (NotFound): pods "compute-0-debug" not found
[must-gather-vzz6q] POD Error from server (NotFound): pods "compute-1-debug" not found
[must-gather-vzz6q] POD Error from server (NotFound): pods "compute-2-debug" not found
[must-gather-vzz6q] POD waiting for helper pod and debug pod for 300 seconds

Comment 5 Tiffany Nguyen 2021-02-10 01:49:18 UTC
Verified with build ocs-operator.v4.7.0-250.ci.
There are no pods "compute-x-debug" not found errors when generating must-gather logs.

Comment 6 Tiffany Nguyen 2021-02-10 01:50:18 UTC
Created attachment 1756081 [details]
console output

Comment 10 errata-xmlrpc 2021-05-19 09:18:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2041