Bug 1878153 - OCS 4.6 must-gather: collect node information under cluster_scoped_resources/oc_output directory
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: must-gather
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: OCS 4.6.0
Assignee: Pulkit Kundra
QA Contact: Neha Berry
URL:
Whiteboard:
Duplicates: 1890216
Depends On:
Blocks:
 
Reported: 2020-09-11 13:45 UTC by Neha Berry
Modified: 2020-12-17 06:24 UTC
CC List: 6 users

Fixed In Version: 4.6.0-142.ci
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-17 06:24:14 UTC
Embargoed:


Attachments
terminal output from must-gather (64.00 KB, text/plain)
2020-10-21 17:25 UTC, Neha Berry


Links
System ID Private Priority Status Summary Last Updated
Github openshift ocs-operator pull 812 0 None closed Add commands to gather_clusterscoped_resources. 2021-01-18 16:01:30 UTC
Github openshift ocs-operator pull 825 0 None closed bug 1878153: [release-4.6] Add commands to gather_clusterscoped_resources. 2021-01-18 16:00:49 UTC
Github openshift ocs-operator pull 857 0 None closed must-gather: Fix oc get command 2021-01-18 16:00:49 UTC
Github openshift ocs-operator pull 858 0 None closed bug 1878153: [release-4.6] must-gather: Fix oc get command 2021-01-18 16:00:49 UTC
Red Hat Product Errata RHSA-2020:5605 0 None None None 2020-12-17 06:24:35 UTC

Description Neha Berry 2020-09-11 13:45:19 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
----------------------------------------------------------
OCS 4.6 must-gather collects "nodes" (oc describe nodes) and "nodes --show-labels" under the openshift-storage namespace instead of the common cluster-scoped-resources folder.

Current folder:  ./namespaces/openshift-storage/oc_output/
Expected folder: ./cluster-scoped-resources/oc_output/

Also, it would be helpful to collect the following outputs too (see the sketch after this list):

a) oc get nodes -o yaml
b) oc get nodes -o wide --show-labels (instead of the current nodes --show-labels)
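
For illustration, a minimal sketch (not the actual ocs-operator gather script) of how these outputs could land under the cluster-scoped directory. BASE_COLLECTION_PATH and the get_nodes_-o_yaml file name are assumptions modeled on typical must-gather scripts; the other two file names match those seen after the fix:

  # Hypothetical sketch; BASE_COLLECTION_PATH is an assumed variable name.
  OC_OUTPUT_DIR="${BASE_COLLECTION_PATH}/cluster-scoped-resources/oc_output"
  mkdir -p "${OC_OUTPUT_DIR}"

  # Collect node information cluster-wide rather than per-namespace.
  oc describe nodes                  > "${OC_OUTPUT_DIR}/desc_nodes"
  oc get nodes -o yaml               > "${OC_OUTPUT_DIR}/get_nodes_-o_yaml"
  oc get nodes -o wide --show-labels > "${OC_OUTPUT_DIR}/get_nodes_-o_wide_--show-labels"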


Version of all relevant components (if applicable):
-------------------------------------------------
OCS 4.6 must-gather

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
-------------------------
No

Is there any workaround available to the best of your knowledge?
-------------------
NA

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
-----------------------
3

Is this issue reproducible?
------------
yes

Can this issue be reproduced from the UI?

------------
NO

If this is a regression, please provide more details to justify this:
---------------
No

Steps to Reproduce:
----------------------
1. Create an OCS 4.6 cluster
2. Run must-gather command
 oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.6
3. Check where the node information is collected in the logs. It should ideally be under the following hierarchy:

"quay-io-rhceph-dev-ocs-must-gather-sha256-558a0e69cdac6ab289724daee736346eb79aaa4aaf37e728f9eee9ff6a670137/cluster-scoped-resources/oc_output/"


Actual results:
----------------
Collected under "./namespaces/openshift-storage/oc_output/"

Expected results:
----------------
Collect under "./cluster-scoped-resources/oc_output/". Also, change "oc get nodes --show-labels" to include -o wide.


Additional info:
-------------------------
Jenkins job: https://ocs4-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/qe-deploy-ocs-cluster/12205/console

Must-gather: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jnk-pr2696-b2816/jnk-pr2696-b2816_20200911T084402/logs/failed_testcase_ocs_logs_1599814046/test_deployment_ocs_logs/

Must-gather command used: oc --kubeconfig cluster/auth/kubeconfig adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.6 --dest-dir=/home/jenkins-build/workspace/ocs-ci/logs/failed_testcase_ocs_logs_1599796293/deployment_ocs_logs/ocs_must_gather


-------------------------------------------

>>$ oc get nodes -o wide --show-labels
NAME              STATUS   ROLES    AGE   VERSION                INTERNAL-IP    EXTERNAL-IP    OS-IMAGE                                                       KERNEL-VERSION                 CONTAINER-RUNTIME                                LABELS
compute-0         Ready    worker   34h   v1.19.0-rc.2+fc4c489   10.1.160.158   10.1.160.158   Red Hat Enterprise Linux CoreOS 46.82.202009101640-0 (Ootpa)   4.18.0-193.19.1.el8_2.x86_64   cri-o://1.19.0-11.rhaos4.6.gitf83564f.el8-rc.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=compute-0,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos
compute-1         Ready    worker   34h   v1.19.0-rc.2+fc4c489   10.1.160.151   10.1.160.151   Red Hat Enterprise Linux CoreOS 46.82.202009101640-0 (Ootpa)   4.18.0-193.19.1.el8_2.x86_64   cri-o://1.19.0-11.rhaos4.6.gitf83564f.el8-rc.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=compute-1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos
compute-2         Ready    worker   34h   v1.19.0-rc.2+fc4c489   10.1.160.156   10.1.160.156   Red Hat Enterprise Linux CoreOS 46.82.202009101640-0 (Ootpa)   4.18.0-193.19.1.el8_2.x86_64   cri-o://1.19.0-11.rhaos4.6.gitf83564f.el8-rc.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=compute-2,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos
control-plane-0   Ready    master   34h   v1.19.0-rc.2+fc4c489   10.1.160.153   10.1.160.153   Red Hat Enterprise Linux CoreOS 46.82.202009101640-0 (Ootpa)   4.18.0-193.19.1.el8_2.x86_64   cri-o://1.19.0-11.rhaos4.6.gitf83564f.el8-rc.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=control-plane-0,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
control-plane-1   Ready    master   34h   v1.19.0-rc.2+fc4c489   10.1.160.60    10.1.160.60    Red Hat Enterprise Linux CoreOS 46.82.202009101640-0 (Ootpa)   4.18.0-193.19.1.el8_2.x86_64   cri-o://1.19.0-11.rhaos4.6.gitf83564f.el8-rc.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=control-plane-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
control-plane-2   Ready    master   34h   v1.19.0-rc.2+fc4c489   10.1.160.155   10.1.160.155   Red Hat Enterprise Linux CoreOS 46.82.202009101640-0 (Ootpa)   4.18.0-193.19.1.el8_2.x86_64   cri-o://1.19.0-11.rhaos4.6.gitf83564f.el8-rc.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=control-plane-2,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos



>>[nberry@localhost logs]$ oc get nodes --show-labels
NAME              STATUS   ROLES    AGE   VERSION                LABELS
compute-0         Ready    worker   34h   v1.19.0-rc.2+fc4c489   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=compute-0,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos
compute-1         Ready    worker   34h   v1.19.0-rc.2+fc4c489   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=compute-1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos
compute-2         Ready    worker   34h   v1.19.0-rc.2+fc4c489   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=compute-2,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos
control-plane-0   Ready    master   34h   v1.19.0-rc.2+fc4c489   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=control-plane-0,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
control-plane-1   Ready    master   34h   v1.19.0-rc.2+fc4c489   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=control-plane-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
control-plane-2   Ready    master   34h   v1.19.0-rc.2+fc4c489   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=control-plane-2,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos

Comment 6 Neha Berry 2020-10-07 10:00:29 UTC
Hi Pulkit,

Do you plan to fix this BZ in OCS 4.6? IIRC from our offline discussion, it was a small change of moving the file to another folder. Let me know.

Comment 11 Neha Berry 2020-10-21 17:18:27 UTC
*** Bug 1890216 has been marked as a duplicate of this bug. ***

Comment 12 Neha Berry 2020-10-21 17:25:49 UTC
Created attachment 1723268 [details]
terminal output from must-gather

Checked the latest OCS 4.6.0-137.ci build, and the command to collect "oc get nodes -o wide --show-labels" is failing.

a) The "oc get nodes -o wide --show-labels" collection fails; its output is collected neither under namespaces/openshift-storage/oc_output (the original location) nor under cluster-scoped-resources/oc_output/.


[must-gather-gn74s] POD collecting oc command sc
[must-gather-gn74s] POD collecting oc command nodes -o wide --show-labels
>> [must-gather-gn74s] POD error: the server doesn't have a resource type "nodes -o wide --show-labels"
[must-gather-gn74s] POD collecting oc command clusterversion
[must-gather-gn74s] POD collecting oc command infrastructures.config
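
For context, a minimal sketch of the kind of shell quoting bug that produces this error, assuming the gather script loops over oc command strings (the variable names below are illustrative, not the actual ocs-operator code):

  # Each entry is one "oc get" invocation stored as a single string.
  commands_get=("sc" "nodes -o wide --show-labels" "clusterversion")

  for command in "${commands_get[@]}"; do
      # Quoted, the whole string reaches oc as ONE argument, so oc looks
      # for a resource type literally named "nodes -o wide --show-labels"
      # and fails exactly as in the log above.
      oc get "${command}"

      # Unquoted, the shell word-splits the string into separate
      # arguments, so oc sees: get nodes -o wide --show-labels
      oc get ${command}
  done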

Sample must-gather from an OCS 4.6 internal mode cluster:

http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/1890183/must-gather.local.7263335647167393572/quay-io-rhceph-dev-ocs-must-gather-sha256-3255bdcfc54ce04e8b0b948cc2d6e4ba5e7fbd2ca14dc8512d5c845e4a9ae157/

Version of all relevant components (if applicable):
--------------------------------------------
OCS  = ocs-operator.v4.6.0-137.ci

The command actually works when run directly on the cluster, but seems to fail during must-gather:


$ oc get nodes -o wide --show-labels
NAME                      STATUS   ROLES    AGE   VERSION           INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                                       KERNEL-VERSION                 CONTAINER-RUNTIME                           LABELS
argo002.ceph.redhat.com   Ready    master   9h    v1.19.0+d59ce34   10.8.128.202   <none>        Red Hat Enterprise Linux CoreOS 46.82.202010201440-0 (Ootpa)   4.18.0-193.28.1.el8_2.x86_64   cri-o://1.19.0-22.rhaos4.6.gitc0306f1.el8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=argo002.ceph.redhat.com,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
argo003.ceph.redhat.com   Ready    master   9h    v1.19.0+d59ce34   10.8.128.203   <none>        Red Hat Enterprise Linux CoreOS 46.82.202010201440-0 (Ootpa)   4.18.0-193.28.1.el8_2.x86_64   cri-o://1.19.0-22.rhaos4.6.gitc0306f1.el8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=argo003.ceph.redhat.com,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
argo004.ceph.redhat.com   Ready    master   9h    v1.19.0+d59ce34   10.8.128.204   <none>        Red Hat Enterprise Linux CoreOS 46.82.202010201440-0 (Ootpa)   4.18.0-193.28.1.el8_2.x86_64   cri-o://1.19.0-22.rhaos4.6.gitc0306f1.el8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=argo004.ceph.redhat.com,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
argo005.ceph.redhat.com   Ready    worker   9h    v1.19.0+d59ce34   10.8.128.205   <none>        Red Hat Enterprise Linux CoreOS 46.82.202010201440-0 (Ootpa)   4.18.0-193.28.1.el8_2.x86_64   cri-o://1.19.0-22.rhaos4.6.gitc0306f1.el8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cluster.ocs.openshift.io/openshift-storage=,kubernetes.io/arch=amd64,kubernetes.io/hostname=argo005.ceph.redhat.com,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,topology.rook.io/rack=rack2
argo006.ceph.redhat.com   Ready    worker   9h    v1.19.0+d59ce34   10.8.128.206   <none>        Red Hat Enterprise Linux CoreOS 46.82.202010201440-0 (Ootpa)   4.18.0-193.28.1.el8_2.x86_64   cri-o://1.19.0-22.rhaos4.6.gitc0306f1.el8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cluster.ocs.openshift.io/openshift-storage=,kubernetes.io/arch=amd64,kubernetes.io/hostname=argo006.ceph.redhat.com,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,topology.rook.io/rack=rack0
argo007.ceph.redhat.com   Ready    worker   9h    v1.19.0+d59ce34   10.8.128.207   <none>        Red Hat Enterprise Linux CoreOS 46.82.202010201440-0 (Ootpa)   4.18.0-193.28.1.el8_2.x86_64   cri-o://1.19.0-22.rhaos4.6.gitc0306f1.el8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cluster.ocs.openshift.io/openshift-storage=,kubernetes.io/arch=amd64,kubernetes.io/hostname=argo007.ceph.redhat.com,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,topology.rook.io/rack=rack1

________________________________________________________________

BTW, verified that the other node-related files are now moved under the "./cluster-scoped-resources" directory and are present as expected.

a) Nodes in YAML - [1]

b) Describe of nodes - [2]


[1] - http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/1890183/must-gather.local.7263335647167393572/quay-io-rhceph-dev-ocs-must-gather-sha256-3255bdcfc54ce04e8b0b948cc2d6e4ba5e7fbd2ca14dc8512d5c845e4a9ae157/cluster-scoped-resources/core/nodes/

[2] - http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/1890183/must-gather.local.7263335647167393572/quay-io-rhceph-dev-ocs-must-gather-sha256-3255bdcfc54ce04e8b0b948cc2d6e4ba5e7fbd2ca14dc8512d5c845e4a9ae157/cluster-scoped-resources/oc_output/desc_nodes

Comment 14 Pulkit Kundra 2020-10-22 11:10:15 UTC
Backport PR: https://github.com/openshift/ocs-operator/pull/858

Comment 15 Neha Berry 2020-10-28 18:15:25 UTC
Thanks Pulkit

Verified the fix on OCS = ocs-operator.v4.6.0-147.ci and OCP = 4.6.0-0.nightly-2020-10-22-034051.

Observation: The `oc get nodes -o wide --show-labels` collection is working now, and the error `the server doesn't have a resource type "nodes -o wide --show-labels"` is no longer seen in the terminal.


>>$ oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.6 |tee terminal-must-gather

Starting pod/control-plane-2-debug ...
To use host binaries, run `chroot /host`
quay.io/rhceph-dev/ocs-must-gather               latest-4.6                          a0c951853a5f   17 hours ago   402 MB

...
...


[must-gather-qwghc] POD collecting oc command sc
[must-gather-qwghc] POD collecting oc command nodes -o wide --show-labels
[must-gather-qwghc] POD collecting oc command clusterversion
[must-gather-qwghc] POD collecting oc command infrastructures.config


-------------------------------------------------------

Logs
=========

>> Describe and oc get nodes 

$ ls -ltrh must-gather.local.6594167730655910032/quay-io-rhceph-dev-ocs-must-gather-sha256-3ce7cfc0a70f533270e9918895844cda82bdfc7e0e1850f34daa6cd58d008083/cluster-scoped-resources/oc_output |grep nodes
-rw-r--r--. 1 nberry nberry 3.0K Oct 28 23:21 get_nodes_-o_wide_--show-labels
-rw-r--r--. 1 nberry nberry  57K Oct 28 23:21 desc_nodes


>> nodes in yaml
$ ls -ltrh must-gather.local.6594167730655910032/quay-io-rhceph-dev-ocs-must-gather-sha256-3ce7cfc0a70f533270e9918895844cda82bdfc7e0e1850f34daa6cd58d008083/cluster-scoped-resources/core/nodes/
total 120K
-rwxr-xr-x. 1 nberry nberry 19K Oct 28 23:21 compute-0.yaml
-rwxr-xr-x. 1 nberry nberry 17K Oct 28 23:21 compute-1.yaml
-rwxr-xr-x. 1 nberry nberry 19K Oct 28 23:21 compute-2.yaml
-rwxr-xr-x. 1 nberry nberry 18K Oct 28 23:21 control-plane-0.yaml
-rwxr-xr-x. 1 nberry nberry 18K Oct 28 23:21 control-plane-1.yaml
-rwxr-xr-x. 1 nberry nberry 18K Oct 28 23:21 control-plane-2.yaml



$ cat must-gather.local.6594167730655910032/quay-io-rhceph-dev-ocs-must-gather-sha256-3ce7cfc0a70f533270e9918895844cda82bdfc7e0e1850f34daa6cd58d008083/cluster-scoped-resources/oc_output/get_nodes_-o_wide_--show-labels
NAME              STATUS   ROLES    AGE   VERSION           INTERNAL-IP    EXTERNAL-IP    OS-IMAGE                                                       KERNEL-VERSION                     CONTAINER-RUNTIME                           LABELS
compute-0         Ready    worker   13d   v1.19.0+d59ce34   10.1.160.165   10.1.160.165   Red Hat Enterprise Linux CoreOS 46.82.202010091720-0 (Ootpa)   4.18.0-193.24.1.el8_2.dt1.x86_64   cri-o://1.19.0-20.rhaos4.6.git97d715e.el8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cluster.ocs.openshift.io/openshift-storage=,kubernetes.io/arch=amd64,kubernetes.io/hostname=compute-0,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,topology.rook.io/rack=rack0
compute-1         Ready    worker   13d   v1.19.0+d59ce34   10.1.160.161   10.1.160.161   Red Hat Enterprise Linux CoreOS 46.82.202010091720-0 (Ootpa)   4.18.0-193.24.1.el8_2.dt1.x86_64   cri-o://1.19.0-20.rhaos4.6.git97d715e.el8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cluster.ocs.openshift.io/openshift-storage=,kubernetes.io/arch=amd64,kubernetes.io/hostname=compute-1,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,topology.rook.io/rack=rack1
compute-2         Ready    worker   13d   v1.19.0+d59ce34   10.1.160.180   10.1.160.180   Red Hat Enterprise Linux CoreOS 46.82.202010091720-0 (Ootpa)   4.18.0-193.24.1.el8_2.dt1.x86_64   cri-o://1.19.0-20.rhaos4.6.git97d715e.el8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,cluster.ocs.openshift.io/openshift-storage=,kubernetes.io/arch=amd64,kubernetes.io/hostname=compute-2,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,topology.rook.io/rack=rack2
control-plane-0   Ready    master   13d   v1.19.0+d59ce34   10.1.160.163   10.1.160.163   Red Hat Enterprise Linux CoreOS 46.82.202010091720-0 (Ootpa)   4.18.0-193.24.1.el8_2.dt1.x86_64   cri-o://1.19.0-20.rhaos4.6.git97d715e.el8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=control-plane-0,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
control-plane-1   Ready    master   13d   v1.19.0+d59ce34   10.1.160.166   10.1.160.166   Red Hat Enterprise Linux CoreOS 46.82.202010091720-0 (Ootpa)   4.18.0-193.24.1.el8_2.dt1.x86_64   cri-o://1.19.0-20.rhaos4.6.git97d715e.el8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=control-plane-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos
control-plane-2   Ready    master   13d   v1.19.0+d59ce34   10.1.160.162   10.1.160.162   Red Hat Enterprise Linux CoreOS 46.82.202010091720-0 (Ootpa)   4.18.0-193.24.1.el8_2.dt1.x86_64   cri-o://1.19.0-20.rhaos4.6.git97d715e.el8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=control-plane-2,kubernetes.io/os=linux,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos


Moving the BZ to the Verified state, as all node-related information is now collected under the cluster-scoped-resources directory.

Comment 18 errata-xmlrpc 2020-12-17 06:24:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5605

