+++ This bug was initially created as a clone of Bug #2218309 +++

Description of problem (please be as detailed as possible and provide log snippets):

The ODF must-gather does not collect the NetworkAttachmentDefinitions that affect Multus. The must-gather should be updated to collect NetworkAttachmentDefinition resources from the "default" and "openshift-storage" namespaces.

Version of all relevant components (if applicable):

I would hope this can be part of an upcoming 4.13.z release.

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

No.

Is there any workaround available to the best of your knowledge?

The OCP must-gather collects NetworkAttachmentDefinitions.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

1

Is this issue reproducible?

Yes.

Can this issue be reproduced from the UI?

N/A

If this is a regression, please provide more details to justify this:

N/A

Steps to Reproduce:

It is not necessary to install ODF with Multus enabled to reproduce this.

First, create these two NetworkAttachmentDefinitions (the first in the openshift-storage namespace, the second in the default namespace):

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: cluster-net
  namespace: openshift-storage
spec:
  config: '{ "cniVersion": "0.3.1", "type": "macvlan", "master": "br-ex", "mode": "bridge", "ipam": { "type": "whereabouts", "range": "192.168.30.0/24" } }'

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: cluster-net
spec:
  config: '{ "cniVersion": "0.3.1", "type": "macvlan", "master": "br-ex", "mode": "bridge", "ipam": { "type": "whereabouts", "range": "192.168.30.0/24" } }'

Next, collect an ODF must-gather.

Both NADs should be collected but are not. The first NAD is in the openshift-storage namespace, and the second is in the default namespace. NADs from other namespaces do not need to be collected for ODF.

--- Additional comment from RHEL Program Management on 2023-06-28 17:41:19 UTC ---

This bug previously had no release flag set. The release flag 'odf-4.14.0' has now been set to '?', so the bug is proposed to be fixed in the ODF 4.14.0 release. Note that the 3 Acks (pm_ack, devel_ack, qa_ack), if any were previously set while the release flag was missing, have now been reset, since the Acks are to be set against a release flag.

--- Additional comment from Blaine Gardner on 2023-07-19 17:15:08 UTC ---

I see this is merged into the must-gather main codebase and is in 4.14. @etamir would you like to target a 4.13.z backport as well? What about 4.12.z for any support exceptions?

--- Additional comment from Red Hat Bugzilla on 2023-08-03 08:28:10 UTC ---

Account disabled by LDAP Audit

--- Additional comment from Eran Tamir on 2023-08-15 16:22:03 UTC ---

Yes. It makes sense to backport to 4.13 and also 4.12 (lower priority).

--- Additional comment from Blaine Gardner on 2023-08-15 17:42:03 UTC ---

@ypadia would you be able to take the action to make sure backport BZs get created for 4.13.z and 4.12.z? I'd like to get this off of my todo list to keep focusing on Rook BZs. Thanks :)

--- Additional comment from Mudit Agarwal on 2023-08-16 04:35:55 UTC ---

Yati, please create clones for 4.13/4.12. Reach out to Sunil for adding the backports to the respective z-streams.
Oded, please provide qa_ack.

--- Additional comment from Yati Padia on 2023-08-16 05:50:46 UTC ---

Added the backport PRs for both 4.13 and 4.12.

--- Additional comment from Mudit Agarwal on 2023-08-17 05:07:45 UTC ---

Yati, please create clone bugs for 4.13/4.12.
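For reference, the collection this bug asks for can be reproduced manually with standard `oc` commands. Below is a minimal sketch, assuming only the `oc` CLI and an illustrative output directory; it is not the actual ocs-must-gather implementation (which, per the verification below, gathers NADs from all namespaces into *_all_ns files):

```
# Sketch: gather NetworkAttachmentDefinitions from the two namespaces named
# in this bug. Directory and file names are illustrative only.
OUT_DIR="${OUT_DIR:-./nad-collection}"   # hypothetical output location
mkdir -p "${OUT_DIR}"

for ns in default openshift-storage; do
    # YAML dump of all NADs in the namespace
    oc get networkattachmentdefinitions.k8s.cni.cncf.io -n "${ns}" -o yaml \
        > "${OUT_DIR}/get_yaml_net_attach_def_${ns}" 2>&1
    # Human-readable description of the same resources
    oc describe networkattachmentdefinitions.k8s.cni.cncf.io -n "${ns}" \
        > "${OUT_DIR}/desc_net_attach_def_${ns}" 2>&1
done
```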
Bug fixed.

Setup:
OCP Version: 4.13.0-0.nightly-2023-10-13-013258
ODF Version: 4.13.4-3
Platform: vSphere

1. Deploy a cluster with Multus:

$ oc get storagecluster -o yaml
```
network:
  multiClusterService: {}
  provider: multus
  selectors:
    cluster: openshift-storage/private-net
    public: openshift-storage/public-net
```

2. Run the must-gather command:

$ oc adm must-gather --image=quay.io/rhceph-dev/ocs-must-gather:latest-4.13

3. Verify that the "desc_net_attach_def_all_ns" and "get_yaml_net_attach_def_all_ns" files exist under the "/namespaces/all" path:

$ pwd
../namespaces/all
oviner:all$ ls | grep net
desc_net_attach_def_all_ns
get_yaml_net_attach_def_all_ns
$ cat get_yaml_net_attach_def_all_ns
apiVersion: v1
items:
- apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    creationTimestamp: "2023-10-13T14:38:05Z"
    generation: 1
    name: private-net
    namespace: openshift-storage
    resourceVersion: "36118"
    uid: 2f9e18a8-1662-4f9f-bcea-c73eeb17a94b
  spec:
    config: '{"cniVersion": "0.3.1", "type": "macvlan", "master": "br-ex", "mode": "bridge", "ipam": {"type": "whereabouts", "range": "192.168.30.0/24"}}'
- apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    creationTimestamp: "2023-10-13T14:38:05Z"
    generation: 1
    name: public-net
    namespace: openshift-storage
    resourceVersion: "36115"
    uid: 0c40bd94-26f3-4383-b86f-ab6daca2146e
  spec:
    config: '{"cniVersion": "0.3.1", "type": "macvlan", "master": "br-ex", "mode": "bridge", "ipam": {"type": "whereabouts", "range": "192.168.20.0/24"}}'
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
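A small helper along these lines can repeat this check on other clusters. This is a sketch assuming the unpacked must-gather layout and file names shown in the verification above, and the NAD names (private-net, public-net) used in this setup:

```
# Sketch: verify that a collected must-gather contains the expected NADs.
# MG_DIR is a hypothetical argument pointing at the unpacked must-gather.
MG_DIR="${1:-.}"

nad_yaml=$(find "${MG_DIR}" -type f -name get_yaml_net_attach_def_all_ns | head -n 1)
if [ -z "${nad_yaml}" ]; then
    echo "FAIL: get_yaml_net_attach_def_all_ns not found under ${MG_DIR}"
    exit 1
fi

for nad in private-net public-net; do
    if grep -q "name: ${nad}" "${nad_yaml}"; then
        echo "PASS: ${nad} present in ${nad_yaml}"
    else
        echo "FAIL: ${nad} missing from ${nad_yaml}"
        exit 1
    fi
done
```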
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.13.4 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:6146