Description of problem:

Version-Release number of selected component (if applicable):
ODF 4.9.0-120.ci and OCP 4.9.0-0.nightly-2021-08-25-111423

How reproducible:

Steps to Reproduce:
1. Collect must-gather for ODF 4.9 using this command:
   oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.9
2. Observe the collected logs

Current must-gather logs can be found here:
http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/bz-aman/1sept/

Actual results:
must-gather doesn't collect some of the odf-operator and storagesystem related logs for ODF 4.9.

Expected results:
All logs related to the re-branding of OCS 4.8 to ODF 4.9, such as odf-operator, storagesystem, or any other newly added resource, should be collected.

Additional info:
There is currently no must-gather functionality for the new resources coming in with the odf-operator. Thankfully, there's not a lot. We'll only really need the following:

* oc get --all-namespaces storagesystem
* oc logs odf-operator
* oc logs odf-console

I think we already gather all Events in the openshift-storage Namespace, so that should include any ODF events.
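The list above could be wired into the gather script roughly as follows. This is only a sketch: the output layout under a base collection path mirrors what must-gather images typically produce, and the deployment names (odf-operator-controller-manager, odf-console) are assumptions based on the pod names seen later in this bug.

```shell
#!/bin/bash
# Sketch of the extra collection steps for the ODF resources.
# BASE_COLLECTION_PATH and the file layout are assumptions, not the
# actual ocs-must-gather implementation.
BASE_COLLECTION_PATH=${BASE_COLLECTION_PATH:-"./must-gather-odf"}
ODF_PATH="${BASE_COLLECTION_PATH}/namespaces/openshift-storage/oc_output"
mkdir -p "${ODF_PATH}"

# Only attempt collection when the oc client is available.
if command -v oc >/dev/null 2>&1; then
    # StorageSystem resources across all namespaces
    oc get storagesystem --all-namespaces -o yaml > "${ODF_PATH}/storagesystem.yaml" 2>&1

    # Logs of the new ODF components (deployment names assumed)
    for deploy in odf-operator-controller-manager odf-console; do
        oc logs "deployment/${deploy}" -n openshift-storage > "${ODF_PATH}/${deploy}.log" 2>&1
    done
fi
```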
https://github.com/openshift/ocs-operator/pull/1328. @jrivera and @amagrawa, please take a look.
SetUp:
Provider: VMware
ODF Version: 4.9.0
OCP Version: 4.9.0-0.nightly-2021-10-08-232649

$ oc get clusterserviceversions -n openshift-storage
NAME                     DISPLAY                       VERSION   REPLACES   PHASE
noobaa-operator.v4.9.0   NooBaa Operator               4.9.0                Succeeded
ocs-operator.v4.9.0      OpenShift Container Storage   4.9.0                Succeeded
odf-operator.v4.9.0      OpenShift Data Foundation     4.9.0                Succeeded

Test Procedure:
1. Collect must-gather:
   $ oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.9
2. Check storagesystem and odf log files:
   /cluster-scoped-resources/rbac.authorization.k8s.io/clusterrolebindings/odf-operator.v4.9.0-6f89878866.yaml
   /cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/storagesystems.odf.openshift.io-v1alpha1-view.yaml
   /cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/storagesystems.odf.openshift.io-v1alpha1-edit.yaml
   /cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/odf-operator.v4.9.0-6f89878866.yaml
   /cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/storagesystems.odf.openshift.io-v1alpha1-crdview.yaml
   /cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/odf-operator-metrics-reader.yaml
   /cluster-scoped-resources/rbac.authorization.k8s.io/clusterroles/storagesystems.odf.openshift.io-v1alpha1-admin.yaml
   /namespaces/openshift-storage/oc_output/storagesystem.yaml
   /namespaces/openshift-storage/pods/odf-console-6456bdd688-gb99s/odf-console-6456bdd688-gb99s.yaml
   /namespaces/openshift-storage/pods/odf-operator-controller-manager-54779dfdbd-ll9f7/odf-operator-controller-manager-54779dfdbd-ll9f7.yaml
   /namespaces/openshift-storage/operators.coreos.com/clusterserviceversions/odf-operator.v4.9.0.yaml

Python script used to find these files:
```
import os

# Root of the extracted must-gather archive
dir_path_4_9 = "/home/odedviner/ClusterPath/auth/must-gather.local.3413281135519036243"

# Print every collected file whose name mentions odf or storagesystem
for root, dirs, files in os.walk(dir_path_4_9):
    for file in files:
        if "odf" in file.lower() or "storagesystem" in file.lower():
            print(os.path.join(root, file))
```
Based on https://bugzilla.redhat.com/show_bug.cgi?id=2000190#c16, moving to VERIFIED state.
The ODF version:
$ oc describe csv odf-operator.v4.9.0 | grep 4.9
  Name:     odf-operator.v4.9.0
  Labels:   full_version=4.9.0-183.ci
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.9.0 enhancement, security, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:5086