Unexplained network errors from multus during pod teardown. This is a run-once pod that I think has just exited. I see a chunk of errors like this in the logs. What do these mean, why are they being printed, and do they need to be printed? Are they a problem?

May 02 19:26:00 ip-10-0-133-133 crio[931]: 2019-05-02T19:26:00Z [verbose] Del: openshift-kube-controller-manager:installer-6-ip-10-0-133-133.ec2.internal:openshift-sdn:eth0 {"cniVersion":"0.3.1","name":"openshift-sdn","type":"openshift-sdn"}
May 02 19:28:00 ip-10-0-133-133 crio[931]: 2019-05-02T19:28:00Z [error] SetNetworkStatus: failed to query the pod revision-pruner-6-ip-10-0-133-133.ec2.internal in out of cluster comm: pods "revision-pruner-6-ip-10-0-133-133.ec2.internal" not found
May 02 19:28:00 ip-10-0-133-133 crio[931]: 2019-05-02T19:28:00Z [error] Multus: Err unset the networks status: SetNetworkStatus: failed to query the pod revision-pruner-6-ip-10-0-133-133.ec2.internal in out of cluster comm: pods "revision-pruner-6-ip-10-0-133-133.ec2.internal" not found

If these aren't indicative of a problem, the severity of this is medium.
I believe these are essentially spurious error messages, since Multus is tearing down a pod that has already been deleted in the apiserver. It would be good to clean these up, but it's not urgent. Optimistically setting this to 4.1.x.
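The likely cleanup is to treat a "not found" response from the apiserver as benign during CNI DEL, since the pod being gone at teardown time is expected. A minimal sketch of that idea in Go, using a stand-in sentinel error rather than Multus's real code or client-go's apierrors package (all names here are illustrative assumptions, not the actual Multus API):

```go
package main

import (
	"errors"
	"fmt"
)

// errPodNotFound stands in for the apiserver's NotFound error; real code
// would use k8s.io/apimachinery/pkg/api/errors.IsNotFound instead.
var errPodNotFound = errors.New("pod not found")

// getPod simulates querying the apiserver for a pod that may already
// be gone by the time the CNI DEL command runs.
func getPod(name string, alreadyDeleted bool) error {
	if alreadyDeleted {
		return errPodNotFound
	}
	return nil
}

// setNetworkStatus clears the pod's network-status annotation on DEL.
// If the pod is already gone, that is expected during teardown, so the
// message is downgraded from [error] to [verbose].
func setNetworkStatus(name string, alreadyDeleted bool) string {
	err := getPod(name, alreadyDeleted)
	if errors.Is(err, errPodNotFound) {
		return fmt.Sprintf("[verbose] pod %s already deleted, skipping status unset", name)
	}
	if err != nil {
		return fmt.Sprintf("[error] SetNetworkStatus: %v", err)
	}
	return fmt.Sprintf("[verbose] unset network status for %s", name)
}

func main() {
	// Pod deleted before DEL ran: logged verbosely, not as an error.
	fmt.Println(setNetworkStatus("revision-pruner-6", true))
	// Pod still present: normal teardown path.
	fmt.Println(setNetworkStatus("installer-6", false))
}
```

The key design point is distinguishing the expected NotFound case from genuine apiserver failures, which should still surface as errors.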
Thanks for filing this. Agreed on medium priority, and I think there's a decent likelihood of getting it into 4.1.x. I have an upstream issue to track as well: https://github.com/intel/multus-cni/issues/310
I've got an upstream pull request open: https://github.com/intel/multus-cni/pull/311. I'm getting some reviews, and I'll bring the fix downstream once the patch lands.
It's approved upstream, and the downstream PR is here: https://github.com/openshift/multus-cni/pull/13
Verified this bug on 4.2.0-0.nightly-2019-08-25-233755:
1. Created some pods and deleted them.
2. Checked the multus pod logs and did not find the related error logs.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922