Bug 1878289 - [Debugging enhancement] connectivity check events have ambiguous related object
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-apiserver
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: Luis Sanchez
QA Contact: Xingxing Xia
Reported: 2020-09-11 19:34 UTC by Luis Sanchez
Modified: 2020-10-27 16:40 UTC
CC List: 4 users

Last Closed: 2020-10-27 16:40:12 UTC




Links:
  Github openshift/cluster-kube-apiserver-operator pull 934 (open): Bug 1878289: connectivity check events have ambiguous related object (last updated 2020-09-21 14:23:39 UTC)
  Red Hat Product Errata RHBA-2020:4196 (last updated 2020-10-27 16:40:34 UTC)

Description Luis Sanchez 2020-09-11 19:34:40 UTC
Description of problem:

ConnectivityOutageDetected and ConnectivityRestored events do not always have a related object that helps determine where the problem is occurring.

For example, events from a kube-apiserver pod sometimes list the namespace as the related object, which is less helpful than listing the node, since the node narrows the source down to the pod running on it.

Events from the openshift-apiserver pods list the apiserver Deployment, which offers no insight into which of the pods is having the connectivity issue.

Comment 1 Luis Sanchez 2020-09-11 19:38:07 UTC
I will normalize the connectivity events to always specify the source pod's node as the relatedObject. Although the pod name might seem a more natural fit for this purpose, the auto-generated pod names produced by a Deployment offer no hint as to which node is experiencing the issue, especially if the pod has since been re-created.
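The actual change lives in cluster-kube-apiserver-operator pull 934. As a minimal sketch of the idea, assuming a hypothetical helper name and a local stand-in for the corev1.ObjectReference type from k8s.io/api/core/v1:

```go
package main

import "fmt"

// ObjectReference mirrors the relevant fields of corev1.ObjectReference
// (local stand-in for illustration; the real code uses k8s.io/api/core/v1).
type ObjectReference struct {
	Kind      string
	Name      string
	Namespace string
}

// relatedObjectForPod builds the event's related object: always the node
// the source pod runs on, never the pod or its Deployment. nodeName would
// come from the pod spec (spec.nodeName); the namespace keeps the event
// scoped to the component. Hypothetical helper, not the operator's API.
func relatedObjectForPod(namespace, nodeName string) ObjectReference {
	return ObjectReference{
		Kind:      "Node",
		Name:      nodeName,
		Namespace: namespace,
	}
}

func main() {
	ref := relatedObjectForPod("openshift-kube-apiserver",
		"ip-10-0-154-169.ap-northeast-1.compute.internal")
	fmt.Printf("%s/%s in %s\n", ref.Kind, ref.Name, ref.Namespace)
}
```

The point of the design is that a Node name is stable across pod restarts, so the event stays actionable even after the original pod is gone.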

Comment 3 Xingxing Xia 2020-09-24 12:28:25 UTC
Tested in 4.6.0-0.nightly-2020-09-24-095222:
$ oc get event -n openshift-kube-apiserver -o yaml

The involvedObject now carries the node name for ConnectivityOutageDetected and ConnectivityRestored:
  involvedObject:
    kind: Node
    name: ip-10-0-154-169.ap-northeast-1.compute.internal
    namespace: openshift-kube-apiserver
    uid: 22be2d53-eb65-4309-9ace-7bb4c8cefb5f
  kind: Event
  ...
  reason: ConnectivityOutageDetected
...
  involvedObject:
    kind: Node
    name: ip-10-0-154-169.ap-northeast-1.compute.internal
    namespace: openshift-kube-apiserver
    uid: 22be2d53-eb65-4309-9ace-7bb4c8cefb5f
  kind: Event
  ...
  reason: ConnectivityRestored

Same for the openshift-apiserver (OAS):
$ oc get event -n openshift-apiserver -o yaml
  involvedObject:
    kind: Node
    name: ip-10-0-154-169.ap-northeast-1.compute.internal
    namespace: openshift-apiserver
    uid: 22be2d53-eb65-4309-9ace-7bb4c8cefb5f
  kind: Event
  ...
  reason: ConnectivityRestored
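Given events shaped like the YAML above, outage events can be mapped straight back to nodes. A minimal Go sketch, assuming trimmed-down struct shapes (real event JSON would come from `oc get event -o json` and has many more fields):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal stand-ins for the Event fields used here (assumed shapes).
type InvolvedObject struct {
	Kind string `json:"kind"`
	Name string `json:"name"`
}

type Event struct {
	Reason         string         `json:"reason"`
	InvolvedObject InvolvedObject `json:"involvedObject"`
}

// nodesWithOutage collects node names from ConnectivityOutageDetected
// events, which is exactly the lookup the fix enables.
func nodesWithOutage(events []Event) []string {
	var nodes []string
	for _, e := range events {
		if e.Reason == "ConnectivityOutageDetected" && e.InvolvedObject.Kind == "Node" {
			nodes = append(nodes, e.InvolvedObject.Name)
		}
	}
	return nodes
}

func main() {
	// Sample data modeled on the verification output above.
	raw := `[
	  {"reason":"ConnectivityOutageDetected","involvedObject":{"kind":"Node","name":"ip-10-0-154-169.ap-northeast-1.compute.internal"}},
	  {"reason":"ConnectivityRestored","involvedObject":{"kind":"Node","name":"ip-10-0-154-169.ap-northeast-1.compute.internal"}}
	]`
	var events []Event
	if err := json.Unmarshal([]byte(raw), &events); err != nil {
		panic(err)
	}
	fmt.Println(nodesWithOutage(events))
	// → [ip-10-0-154-169.ap-northeast-1.compute.internal]
}
```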

Comment 6 errata-xmlrpc 2020-10-27 16:40:12 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

