Bug 1705712 - Unexpected Multus error messages
Summary: Unexpected Multus error messages
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.2.0
Assignee: Douglas Smith
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks: 2027770
 
Reported: 2019-05-02 19:44 UTC by Clayton Coleman
Modified: 2021-11-30 15:34 UTC (History)
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2027770
Environment:
Last Closed: 2019-10-16 06:28:21 UTC
Target Upstream Version:
Embargoed:

Links
System ID Private Priority Status Summary Last Updated
Github openshift multus-cni pull 13 0 'None' closed [bugfix] Skip clearing the network status annotation if the pod sandbox is not found 2021-02-10 20:39:50 UTC
Red Hat Product Errata RHBA-2019:2922 0 None None None 2019-10-16 06:28:33 UTC

Description Clayton Coleman 2019-05-02 19:44:51 UTC
Unexplained network errors from Multus during pod teardown.

This is a run-once pod that I think has just exited. I see a chunk of errors like this in the logs. What do these mean, why are they being printed, and do they need to be printed? Are they a problem?

May 02 19:26:00 ip-10-0-133-133 crio[931]: 2019-05-02T19:26:00Z [verbose] Del: openshift-kube-controller-manager:installer-6-ip-10-0-133-133.ec2.internal:openshift-sdn:eth0 {"cniVersion":"0.3.1","name":"openshift-sdn","type":"openshift-sdn"}
May 02 19:28:00 ip-10-0-133-133 crio[931]: 2019-05-02T19:28:00Z [error] SetNetworkStatus: failed to query the pod revision-pruner-6-ip-10-0-133-133.ec2.internal in out of cluster comm: pods "revision-pruner-6-ip-10-0-133-133.ec2.internal" not found
May 02 19:28:00 ip-10-0-133-133 crio[931]: 2019-05-02T19:28:00Z [error] Multus: Err unset the networks status: SetNetworkStatus: failed to query the pod revision-pruner-6-ip-10-0-133-133.ec2.internal in out of cluster comm: pods "revision-pruner-6-ip-10-0-133-133.ec2.internal" not found

If these aren't indicative of a problem, the severity of this is medium.

Comment 1 Casey Callendrello 2019-05-03 12:14:29 UTC
I believe these are essentially spurious error messages: Multus is trying to clean up a pod that has already been deleted from the apiserver.

It would be good to clean these up, but it's not urgent. Optimistically setting this to 4.1.x.

Comment 2 Douglas Smith 2019-05-03 12:16:43 UTC
Thanks for filing this. Agreed on medium priority, and I think there's a decent likelihood of getting it into 4.1.x.

I have an upstream issue to track as well: https://github.com/intel/multus-cni/issues/310

Comment 3 Douglas Smith 2019-05-03 13:35:36 UTC
I've got an upstream pull request available: https://github.com/intel/multus-cni/pull/311

Getting some reviews, and I'll bring downstream once the patch lands.

Comment 4 Douglas Smith 2019-05-03 14:15:16 UTC
It's approved upstream, and I have the downstream PR here @ https://github.com/openshift/multus-cni/pull/13
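The downstream PR's title describes the fix: skip clearing the network status annotation when the pod is already gone. Below is a minimal, self-contained sketch of that guard pattern; the function names and the `errPodNotFound` sentinel are illustrative stand-ins (the real code queries the apiserver via client-go and checks the API not-found error), not the actual Multus implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// errPodNotFound is a hypothetical sentinel standing in for the
// apiserver's not-found error that the real fix checks for.
var errPodNotFound = errors.New("pod not found")

// getPod is an illustrative lookup stub; in Multus this would be a
// client-go query. Here it simulates the race from this bug: by the
// time teardown runs, the pod is already deleted.
func getPod(name string) error {
	return errPodNotFound
}

// unsetNetworkStatus mimics the teardown path. If the pod no longer
// exists, there is no annotation left to clear, so not-found is
// treated as success instead of being surfaced as an [error] log line.
func unsetNetworkStatus(name string) error {
	if err := getPod(name); err != nil {
		if errors.Is(err, errPodNotFound) {
			// Pod already gone: log quietly and skip the unset.
			fmt.Printf("[verbose] pod %s already deleted; skipping status unset\n", name)
			return nil
		}
		// Any other failure is still a real error.
		return fmt.Errorf("SetNetworkStatus: failed to query the pod %s: %w", name, err)
	}
	// ...clear the network status annotation here...
	return nil
}

func main() {
	if err := unsetNetworkStatus("revision-pruner-6"); err != nil {
		fmt.Println("error:", err)
	}
}
```

With this guard in place, the teardown of an already-deleted run-once pod no longer produces the "Err unset the networks status" error lines quoted in the description.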

Comment 7 zhaozhanqi 2019-08-27 05:33:08 UTC
Verified this bug on 4.2.0-0.nightly-2019-08-25-233755:

1. Created some pods and deleted them.
2. Checked the Multus pod logs and did not find the related error messages.

Comment 8 errata-xmlrpc 2019-10-16 06:28:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

