Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1782510

Summary: ovnkube-master should retry setting node annotations
Product: OpenShift Container Platform
Component: Networking
Sub component: ovn-kubernetes
Reporter: Russell Bryant <rbryant>
Assignee: Alexander Constantinescu <aconstan>
QA Contact: Anurag saxena <anusaxen>
CC: dcbw, eparis, rbrattai
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
Version: 4.3.0
Target Milestone: ---
Target Release: 4.4.0
Hardware: Unspecified
OS: Unspecified
Type: Bug
Regression: ---
Last Closed: 2020-05-13 21:54:50 UTC

Description Russell Bryant 2019-12-11 18:16:51 UTC
Description of problem:

I got access to a 50-node AWS cluster from eparis (IPv4, not my IPv6 work) where OVN was not working correctly.  One of the issues I noticed was an ovnkube-node pod that had restarted over 100 times, because it could never fully initialize.

In the ovnkube-node log, I see lots of messages like this:

time="2019-12-11T18:03:44Z" level=error msg="Error while obtaining gateway router addresses for ip-10-0-153-96.ec2.internal - OVN command '/usr/bin/ovn-nbctl --private-key=/ovn-cert/tls.key --certificate=/ovn-cert/tls.crt --bootstrap-ca-cert=/ovn-ca/ca-bundle.crt --db=ssl:10.0.141.101:9641,ssl:10.0.154.146:9641,ssl:10.0.162.62:9641 --timeout=15 lsp-get-addresses etor-GR_ip-10-0-153-96.ec2.internal' failed: exit status 1"
time="2019-12-11T18:03:44Z" level=error msg="Error while obtaining addresses for k8s-ip-10-0-153-96.ec2.internal on node ip-10-0-153-96.ec2.internal - Error while obtaining dynamic addresses for k8s-ip-10-0-153-96.ec2.internal: OVN command '/usr/bin/ovn-nbctl --private-key=/ovn-cert/tls.key --certificate=/ovn-cert/tls.crt --bootstrap-ca-cert=/ovn-ca/ca-bundle.crt --db=ssl:10.0.141.101:9641,ssl:10.0.154.146:9641,ssl:10.0.162.62:9641 --timeout=15 get logical_switch_port k8s-ip-10-0-153-96.ec2.internal dynamic_addresses' failed: exit status 1"


This is normal, at least for a while.  See https://bugzilla.redhat.com/show_bug.cgi?id=1779464

These errors occur while ovnkube-node is waiting for ovnkube-master to finish initializing the Node.


Checking the ovnkube-master logs, these are the only hints I could find:

time="2019-12-11T02:39:35Z" level=info msg="Allocated node ip-10-0-153-96.ec2.internal HostSubnet 10.128.3.0/26"                                                                                                                                          
time="2019-12-11T02:39:35Z" level=info msg="Setting annotations ovn_host_subnet=10.128.3.0/26 on node ip-10-0-153-96.ec2.internal"
...
time="2019-12-11T02:39:35Z" level=error msg="macAddress annotation not found for node \"ip-10-0-153-96.ec2.internal\" "
...
time="2019-12-11T02:39:39Z" level=error msg="error update Node Management Port for node ip-10-0-153-96.ec2.internal: Error in obtaining host subnet for node \"ip-10-0-153-96.ec2.internal\" for deletion"


It looks like there was some failure while setting up this Node, and ovnkube-master never retried.
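The behavior the summary asks for — retrying the annotation update instead of giving up after one failure — can be sketched as a retry loop with exponential backoff. This is a minimal, self-contained Go sketch, not the actual ovn-kubernetes code: the function names and the simulated flaky API call are hypothetical stand-ins for the real Kubernetes client calls.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// attempts counts calls to the simulated API, so the example can show
// the retry loop recovering from transient failures.
var attempts int

// setNodeAnnotation is a hypothetical stand-in for the real Kubernetes
// API call; here it fails twice before succeeding, to simulate transient
// errors during node initialization.
func setNodeAnnotation(node, key, value string) error {
	attempts++
	if attempts < 3 {
		return errors.New("transient API error")
	}
	return nil
}

// retrySetAnnotation retries the annotation update with exponential
// backoff instead of failing the node setup permanently on first error.
func retrySetAnnotation(node, key, value string, maxRetries int) error {
	backoff := 10 * time.Millisecond
	var err error
	for i := 0; i < maxRetries; i++ {
		if err = setNodeAnnotation(node, key, value); err == nil {
			return nil
		}
		time.Sleep(backoff)
		backoff *= 2
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxRetries, err)
}

func main() {
	err := retrySetAnnotation("ip-10-0-153-96.ec2.internal",
		"ovn_host_subnet", "10.128.3.0/26", 5)
	fmt.Println("attempts:", attempts, "err:", err)
}
```

In the real codebase a conflict-aware helper (such as client-go's retry utilities) would be the idiomatic choice, since annotation updates can also fail on resource-version conflicts rather than only on transient API errors.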


Version-Release number of selected component (if applicable):

4.3.0-0.nightly-2019-12-10-120829

Comment 1 Casey Callendrello 2019-12-12 16:07:27 UTC
Alexander, can you look at this?

I suspect this has been fixed with some of Dan's patches (that haven't merged yet), but I'm not 100% sure.

Comment 4 Ross Brattain 2020-01-20 23:35:50 UTC
Unable to reproduce on 4.4.0-0.nightly-2020-01-20-103903

Rebooted a master and then a node; unable to find similar messages in the ovnkube-master and ovnkube-node logs.

Comment 6 errata-xmlrpc 2020-05-13 21:54:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581