Description of problem:
Make networks with a 0.0.0.0 gateway in sync.

With nmstate-1.0.2-13.el8_4.noarch and above, all networks whose gateway is outside the subnet's range are reported as in sync on the engine side, but a gateway of 0.0.0.0 is still reported as out-of-sync. In fact the change is applied on the host successfully and passes nmstate verification as expected. I would like to check how expensive and risky it would be to fix the 0.0.0.0 gateway case as well.

Here are some examples of how it behaves:

1. Network with static IPv4 5.5.5.212, subnet 24, gateway 1.1.1.1 - result: network is in sync, changes applied, expected engine warning that the gateway is outside the subnet's range.

routes:
  config:
  - destination: 0.0.0.0/0
    metric: 426
    next-hop-address: 1.1.1.1
    next-hop-interface: nettest1

"IPv4 gateway 1.1.1.1 is out of the subnet range defined by the IP address and netmask specified on ens4f1 in host 'hostname' of cluster Cluster1"

2. Network with static IPv4 5.5.5.212, subnet 24, gateway 2.2.2.254 - result: network is in sync, changes applied on the host, expected engine warning that the gateway is outside the subnet's range.

routes:
  config:
  - destination: 0.0.0.0/0
    metric: 426
    next-hop-address: 2.2.2.254
    next-hop-interface: nettest1

"IPv4 gateway 2.2.2.254 is out of the subnet range defined by the IP address and netmask specified on ens4f1 in host 'hostname' of cluster Cluster1"

3. Network with static IPv4 5.5.5.212, subnet 24, gateway 0.0.0.0 - result: network is out-of-sync, changes applied on the host, expected engine warning that the gateway is outside the subnet's range.

routes:
  config:
  - destination: 0.0.0.0/0
    metric: 426
    next-hop-address: 0.0.0.0
    next-hop-interface: nettest1

"IPv4 gateway 0.0.0.0 is out of the subnet range defined by the IP address and netmask specified on ens4f1 in host 'hostname' of cluster Cluster1"
"Host <hostname>'s following network(s) are not synchronized with their Logical Network configuration: nettest1."

Version-Release number of selected component (if applicable):
rhvm-4.4.8.3-0.10.el8ev.noarch

How reproducible:
100%
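A 0.0.0.0 gateway conventionally means "no default gateway", so a sync check that compares the literal desired gateway against the gateway reported by the host could plausibly flag this case as out-of-sync even when nmstate applied and verified the state. Below is a minimal sketch of a comparison that normalizes 0.0.0.0 before comparing; it is only an illustration, and the names (is_gateway_in_sync, _normalize_gateway, reported_routes) are hypothetical, not taken from vdsm or engine code.

# Hypothetical sketch (not vdsm/engine code): normalize a 0.0.0.0 gateway so
# the sync comparison treats it the same as "no default gateway configured".

DEFAULT_ROUTE = "0.0.0.0/0"
NO_GATEWAY = "0.0.0.0"


def _normalize_gateway(gateway):
    """Map 0.0.0.0 (or an unset value) to None, meaning 'no default gateway'."""
    if gateway in (None, "", NO_GATEWAY):
        return None
    return gateway


def _reported_gateway(routes_config, next_hop_interface):
    """Extract the default-route next hop reported for a given interface."""
    for route in routes_config:
        if (route.get("destination") == DEFAULT_ROUTE
                and route.get("next-hop-interface") == next_hop_interface):
            return route.get("next-hop-address")
    return None


def is_gateway_in_sync(desired_gateway, routes_config, next_hop_interface):
    """True if the desired and reported gateways match after normalization."""
    reported = _reported_gateway(routes_config, next_hop_interface)
    return _normalize_gateway(desired_gateway) == _normalize_gateway(reported)


# Example mirroring case 3 above: desired gateway 0.0.0.0 versus the route
# state reported by the host for nettest1.
reported_routes = [
    {
        "destination": "0.0.0.0/0",
        "metric": 426,
        "next-hop-address": "0.0.0.0",
        "next-hop-interface": "nettest1",
    },
]
print(is_gateway_in_sync("0.0.0.0", reported_routes, "nettest1"))  # True

With such a normalization, case 3 would be treated like cases 1 and 2: the network stays in sync and only the engine warning about the gateway being outside the subnet's range is raised.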
Verified on - rhvm-4.5.0-0.237.el8ev.noarch with vdsm-4.50.0.10-1.el8ev.x86_64
This bugzilla is included in the oVirt 4.5.0 release, published on April 20th 2022. Since the problem described in this bug report should be resolved in the oVirt 4.5.0 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.