Description of problem:

I got access to a 50-node AWS cluster from eparis (IPv4, not my IPv6 work) where OVN was not working correctly. One of the issues I noticed was an ovnkube-node pod that had restarted over 100 times because it can never fully initialize. In the ovnkube-node log, I see lots of messages like this:

time="2019-12-11T18:03:44Z" level=error msg="Error while obtaining gateway router addresses for ip-10-0-153-96.ec2.internal - OVN command '/usr/bin/ovn-nbctl --private-key=/ovn-cert/tls.key --certificate=/ovn-cert/tls.crt --bootstrap-ca-cert=/ovn-ca/ca-bundle.crt --db=ssl:10.0.141.101:9641,ssl:10.0.154.146:9641,ssl:10.0.162.62:9641 --timeout=15 lsp-get-addresses etor-GR_ip-10-0-153-96.ec2.internal' failed: exit status 1"
time="2019-12-11T18:03:44Z" level=error msg="Error while obtaining addresses for k8s-ip-10-0-153-96.ec2.internal on node ip-10-0-153-96.ec2.internal - Error while obtaining dynamic addresses for k8s-ip-10-0-153-96.ec2.internal: OVN command '/usr/bin/ovn-nbctl --private-key=/ovn-cert/tls.key --certificate=/ovn-cert/tls.crt --bootstrap-ca-cert=/ovn-ca/ca-bundle.crt --db=ssl:10.0.141.101:9641,ssl:10.0.154.146:9641,ssl:10.0.162.62:9641 --timeout=15 get logical_switch_port k8s-ip-10-0-153-96.ec2.internal dynamic_addresses' failed: exit status 1"

This is normal, at least for a while (see https://bugzilla.redhat.com/show_bug.cgi?id=1779464); it occurs while ovnkube-node waits for ovnkube-master to finish some Node initialization. Checking the ovnkube-master logs, these are the only hints I could find:

time="2019-12-11T02:39:35Z" level=info msg="Allocated node ip-10-0-153-96.ec2.internal HostSubnet 10.128.3.0/26"
time="2019-12-11T02:39:35Z" level=info msg="Setting annotations ovn_host_subnet=10.128.3.0/26 on node ip-10-0-153-96.ec2.internal"
...
time="2019-12-11T02:39:35Z" level=error msg="macAddress annotation not found for node \"ip-10-0-153-96.ec2.internal\" "
...
time="2019-12-11T02:39:39Z" level=error msg="error update Node Management Port for node ip-10-0-153-96.ec2.internal: Error in obtaining host subnet for node \"ip-10-0-153-96.ec2.internal\" for deletion"

It looks like there was some failure in setting up this Node, and it never tried again.

Version-Release number of selected component (if applicable):
4.3.0-0.nightly-2019-12-10-120829
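For context on the node/master dependency described above, here is a minimal sketch of how a node-side agent could wait for the ovn_host_subnet annotation that ovnkube-master sets (visible in the master log above), using client-go. This is an illustration only, not the actual ovn-kubernetes implementation; waitForHostSubnet and the polling intervals are hypothetical.

// Hypothetical sketch: poll the Node object until ovnkube-master has
// written the ovn_host_subnet annotation, or give up after a timeout.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// waitForHostSubnet (hypothetical helper) blocks until the node carries
// the ovn_host_subnet annotation, returning its value.
func waitForHostSubnet(clientset kubernetes.Interface, nodeName string) (string, error) {
	var subnet string
	err := wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API error: keep polling
		}
		if s, ok := node.Annotations["ovn_host_subnet"]; ok {
			subnet = s
			return true, nil
		}
		return false, nil // master has not annotated this node yet
	})
	return subnet, err
}

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)
	subnet, err := waitForHostSubnet(clientset, "ip-10-0-153-96.ec2.internal")
	if err != nil {
		panic(fmt.Errorf("node never initialized: %w", err))
	}
	fmt.Println("host subnet:", subnet)
}

Returning (false, nil) on a transient API error keeps the poll going rather than aborting, which matches the behavior seen here: ovnkube-node keeps retrying (and eventually gets restarted by kubelet) until the annotation shows up. The bug is that the master never completed its side, so this wait could never succeed.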
Alexander, can you look at this? I suspect this has been fixed with some of Dan's patches (that haven't merged yet), but I'm not 100% sure.
Unable to reproduce on 4.4.0-0.nightly-2020-01-20-103903. Rebooted a master and then a node; unable to find similar messages in the ovnkube-master and ovnkube-node logs.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581