Bug 1745384
Summary: | [IPv6 Static] Engine should allow updating network's static ipv6gateway
---|---
Product: | Red Hat Enterprise Virtualization Manager
Reporter: | Germano Veit Michel <gveitmic>
Component: | ovirt-engine
Assignee: | Dominik Holler <dholler>
Status: | CLOSED ERRATA
QA Contact: | Roni <reliezer>
Severity: | medium
Docs Contact: | Rolfe Dlugy-Hegwer <rdlugyhe>
Priority: | high
Version: | 4.3.5
CC: | dholler, eraviv, mburman, mtessun, rdlugyhe
Target Milestone: | ovirt-4.4.0
Keywords: | ZStream
Target Release: | ---
Hardware: | x86_64
OS: | Linux
Whiteboard: |
Fixed In Version: |
Doc Type: | Bug Fix
Doc Text: | Previously, trying to update the IPv6 gateway in the Setup Networks dialog removed it from the network attachment. The current release fixes this issue: you can update the IPv6 gateway if the related network has the default route role.
Story Points: | ---
Clone Of: |
: | 1759461 (view as bug list)
Environment: |
Last Closed: | 2020-08-04 13:20:09 UTC
Type: | Bug
Regression: | ---
Mount Type: | ---
Documentation: | ---
CRM: |
Verified Versions: |
Category: | ---
oVirt Team: | Network
RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | ---
Target Upstream Version: |
Embargoed: |
Bug Depends On: |
Bug Blocks: | 1759461
Attachments: |
Description
Germano Veit Michel
2019-08-26 01:30:05 UTC
Just to be clearer: this happens when configuring the IPv6 default route of a host via Setup Networks. When setting it in the Setup Networks dialog and clicking OK, the engine seems to always remove the address and send a null to the host. This is not related to moving the default route network.

Hi Germano,

According to your 'steps to reproduce', you changed the IPv6 method to static from the engine side at some point. This might still have left IPv4 on a dynamic protocol (dual stack is not supported), and it might not have really made IPv6 static if, for example, this is a virtual host that is getting router advertisements from libvirt (which would remove your static gateway). So can you please document:

* the state of the corresponding NIC on the host prior to attaching the host to the engine: it should be configured static for *both* IPv4 and IPv6
* the state of the ovirtmgmt network before the change, as viewed from web-admin
* whether ovirtmgmt had a dynamic IPv4 attachment configuration prior to setting up IPv6 and after attaching the host
* detailed steps to reproduce and the exact inputs to Setup Networks
* the final state of ovirtmgmt as viewed from web-admin
* whether there was another NIC on the host with dynamic IPv6 / a default IPv6 gateway to begin with. Adding another gateway on ovirtmgmt would cause VDSM to perceive multiple IPv6 gateways on the host, which would cause VDSM to return no gateway ("::") to the engine.

Thanks

Hi Eitan,

OK, I understand my previous reproducer was a "not supported shortcut" to reproduce the problem. So I've reproduced it again under more correct conditions and captured more details for you. I've done 3 different tests; 2 failed and one succeeded. Tests 1 and 3 represent what our customer is doing, and both are failing. The customer also indicates this was working fine on 4.1 and 4.2, and broke when they upgraded to 4.3.

TEST 1: adding static IPv6 to a host with already-configured static IPv4, host already added to the engine

1.
Prior to configuring Static IPv6, Host status is a single IPv4 static ovirtmgmt network # vdsm-client Host getNetworkCapabilities { "bridges": { "ovirtmgmt": { "ipv6autoconf": true, "addr": "192.168.150.2", "dhcpv6": false, "ipv6addrs": [], "mtu": "1500", "dhcpv4": false, "netmask": "255.255.255.0", "ipv4defaultroute": true, "stp": "off", "ipv4addrs": [ "192.168.150.2/24" ], "ipv6gateway": "::", "gateway": "192.168.150.254", "opts": { "multicast_last_member_count": "2", "vlan_protocol": "0x8100", "hash_elasticity": "4", "multicast_query_response_interval": "1000", "group_fwd_mask": "0x0", "multicast_snooping": "1", "multicast_startup_query_interval": "3125", "hello_timer": "0", "multicast_querier_interval": "25500", "max_age": "2000", "hash_max": "512", "stp_state": "0", "topology_change_detected": "0", "priority": "32768", "multicast_igmp_version": "2", "multicast_membership_interval": "26000", "root_path_cost": "0", "root_port": "0", "multicast_stats_enabled": "0", "multicast_startup_query_count": "2", "nf_call_iptables": "0", "vlan_stats_enabled": "0", "hello_time": "200", "topology_change": "0", "bridge_id": "8000.52540019c102", "topology_change_timer": "0", "ageing_time": "30000", "nf_call_ip6tables": "0", "multicast_mld_version": "1", "gc_timer": "3391", "root_id": "8000.52540019c102", "nf_call_arptables": "0", "group_addr": "1:80:c2:0:0:0", "multicast_last_member_interval": "100", "default_pvid": "1", "multicast_query_interval": "12500", "multicast_query_use_ifaddr": "0", "tcn_timer": "0", "multicast_router": "1", "vlan_filtering": "0", "multicast_querier": "0", "forward_delay": "0" }, "ports": [ "eth0" ] } }, "bondings": {}, "nameservers": [ "192.168.150.254" ], "nics": { "eth0": { "ipv6autoconf": false, "addr": "", "speed": 0, "dhcpv6": false, "ipv6addrs": [], "mtu": "1500", "dhcpv4": false, "netmask": "", "ipv4defaultroute": false, "ipv4addrs": [], "hwaddr": "52:54:00:19:c1:02", "ipv6gateway": "::", "gateway": "" } }, "supportsIPv6": true, "netConfigDirty": "False", "vlans": {}, "networks": { "ovirtmgmt": { "iface": "ovirtmgmt", "ipv6autoconf": true, "addr": "192.168.150.2", "dhcpv6": false, "ipv6addrs": [], "switch": "legacy", "bridged": true, "southbound": "eth0", "dhcpv4": false, "netmask": "255.255.255.0", "ipv4defaultroute": true, "stp": "off", "ipv4addrs": [ "192.168.150.2/24" ], "mtu": "1500", "ipv6gateway": "::", "gateway": "192.168.150.254", "ports": [ "eth0" ] } } } 2. Host -> Setup Networks IPv6 -> Static Address: 2::2/120 Gateway 2:ff -> Click OK 3. Engine does not send IPv6 Gateway, ipv6Gateway sent to host parameter is null: 2019-08-28 12:06:47,313+10 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-1) [3655e7f6-65b5-4411-a570-9c10199274c9] EVENT_ID: NETWORK_REMOVING_IPV6_GATEWAY_FROM_OLD_DEFAULT_ROUTE_ROLE_ATTACHMENT(10,926), On cluster Default the 'Default Route Role' network is no longer network ovirtmgmt. The IPv6 gateway is being removed from this network. 
2019-08-28 12:06:47,315+10 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] (default task-1) [3655e7f6-65b5-4411-a570-9c10199274c9] START, HostSetupNetworksVDSCommand(HostName = host2.kvm, HostSetupNetworksVdsCommandParameters:{hostId='aba2bce2-7f50-4d4e-9b1d-453163f88f16', vds='Host[host2.kvm,aba2bce2-7f50-4d4e-9b1d-453163f88f16]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='[]', ipv4BootProtocol='STATIC_IP', ipv4Address='192.168.150.2', ipv4Netmask='255.255.255.0', ipv4Gateway='192.168.150.254', ipv6BootProtocol='STATIC_IP', ipv6Address='2::2', ipv6Prefix='120', ipv6Gateway='null', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'}), log id: 784df1f8 4. The result is: # vdsm-client Host getNetworkCapabilities { "bridges": { "ovirtmgmt": { "ipv6autoconf": false, "addr": "192.168.150.2", "dhcpv6": false, "ipv6addrs": [ "2::2/120" ], "mtu": "1500", "dhcpv4": false, "netmask": "255.255.255.0", "ipv4defaultroute": true, "stp": "off", "ipv4addrs": [ "192.168.150.2/24" ], "ipv6gateway": "::", "gateway": "192.168.150.254", "opts": { "multicast_last_member_count": "2", "vlan_protocol": "0x8100", "hash_elasticity": "4", "multicast_query_response_interval": "1000", "group_fwd_mask": "0x0", "multicast_snooping": "1", "multicast_startup_query_interval": "3125", "hello_timer": "0", "multicast_querier_interval": "25500", "max_age": "2000", "hash_max": "512", "stp_state": "0", "topology_change_detected": "0", "priority": "32768", "multicast_igmp_version": "2", "multicast_membership_interval": "26000", "root_path_cost": "0", "root_port": "0", "multicast_stats_enabled": "0", "multicast_startup_query_count": "2", "nf_call_iptables": "0", "vlan_stats_enabled": "0", "hello_time": "200", "topology_change": "0", "bridge_id": "8000.52540019c102", "topology_change_timer": "0", "ageing_time": "30000", "nf_call_ip6tables": "0", "multicast_mld_version": "1", "gc_timer": "6187", "root_id": "8000.52540019c102", "nf_call_arptables": "0", "group_addr": "1:80:c2:0:0:0", "multicast_last_member_interval": "100", "default_pvid": "1", "multicast_query_interval": "12500", "multicast_query_use_ifaddr": "0", "tcn_timer": "0", "multicast_router": "1", "vlan_filtering": "0", "multicast_querier": "0", "forward_delay": "0" }, "ports": [ "eth0" ] } }, "bondings": {}, "nameservers": [ "192.168.150.254" ], "nics": { "eth0": { "ipv6autoconf": false, "addr": "", "speed": 0, "dhcpv6": false, "ipv6addrs": [], "mtu": "1500", "dhcpv4": false, "netmask": "", "ipv4defaultroute": false, "ipv4addrs": [], "hwaddr": "52:54:00:19:c1:02", "ipv6gateway": "::", "gateway": "" } }, "supportsIPv6": true, "netConfigDirty": "False", "vlans": {}, "networks": { "ovirtmgmt": { "iface": "ovirtmgmt", "ipv6autoconf": false, "addr": "192.168.150.2", "dhcpv6": false, "ipv6addrs": [ "2::2/120" ], "switch": "legacy", "bridged": true, "southbound": "eth0", "dhcpv4": false, "netmask": "255.255.255.0", "ipv4defaultroute": true, "stp": "off", "ipv4addrs": [ "192.168.150.2/24" ], "mtu": "1500", "ipv6gateway": "::", "gateway": "192.168.150.254", "ports": [ "eth0" ] } } } -------------------------------------------------------------------------------------------- TEST 2: Clean host with only IPv6 Static, no IPv4. 1. 
No network config at all on the host 2. Add IPv6 address, without default gateway 3. Add to engine 4. Host -> Setup Networks IPv6 -> Static Address: 2::2/120 Gateway 2:ff -> Click OK 5. Works, ipv6Gateway is sent. 2019-08-28 13:38:06,132+10 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] (default task-12) [8e1d1d0e-9cdc-4136-9ab2-58cd72d7a5e3] START, HostSetupNetworksVDSCommand(HostName = host2.kvm, HostSetupNetworksVdsCommandParameters:{hostId='240311f3-21a5-498c-a4f6-4ebfc1af1eb4', vds='Host[host2.kvm,240311f3-21a5-498c-a4f6-4ebfc1af1eb4]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='[]', ipv4BootProtocol='NONE', ipv4Address='null', ipv4Netmask='null', ipv4Gateway='null', ipv6BootProtocol='STATIC_IP', ipv6Address='2::2', ipv6Prefix='120', ipv6Gateway='2::ff', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'}), log id: 783cc446 -------------------------------------------------------------------------------------------- TEST 3: Clean Host with IPv4 static and IPv6 static 1. No network config at all on the host 2. Configure just IPv6+IPv4 Static, IPv6 without Gateway, add to the engine # vdsm-client Host getNetworkCapabilities { "bridges": { "ovirtmgmt": { "ipv6autoconf": false, "addr": "192.168.150.2", "dhcpv6": false, "ipv6addrs": [ "2::2/120" ], "mtu": "1500", "dhcpv4": false, "netmask": "255.255.255.0", "ipv4defaultroute": true, "stp": "off", "ipv4addrs": [ "192.168.150.2/24" ], "ipv6gateway": "::", "gateway": "192.168.150.254", "opts": { "multicast_last_member_count": "2", "vlan_protocol": "0x8100", "hash_elasticity": "4", "multicast_query_response_interval": "1000", "group_fwd_mask": "0x0", "multicast_snooping": "1", "multicast_startup_query_interval": "3125", "hello_timer": "0", "multicast_querier_interval": "25500", "max_age": "2000", "hash_max": "512", "stp_state": "0", "topology_change_detected": "0", "priority": "32768", "multicast_igmp_version": "2", "multicast_membership_interval": "26000", "root_path_cost": "0", "root_port": "0", "multicast_stats_enabled": "0", "multicast_startup_query_count": "2", "nf_call_iptables": "0", "vlan_stats_enabled": "0", "hello_time": "200", "topology_change": "0", "bridge_id": "8000.52540019c102", "topology_change_timer": "0", "ageing_time": "30000", "nf_call_ip6tables": "0", "multicast_mld_version": "1", "gc_timer": "19441", "root_id": "8000.52540019c102", "nf_call_arptables": "0", "group_addr": "1:80:c2:0:0:0", "multicast_last_member_interval": "100", "default_pvid": "1", "multicast_query_interval": "12500", "multicast_query_use_ifaddr": "0", "tcn_timer": "0", "multicast_router": "1", "vlan_filtering": "0", "multicast_querier": "0", "forward_delay": "0" }, "ports": [ "eth0" ] } }, "bondings": {}, "nameservers": [ "192.168.150.254" ], "nics": { "eth0": { "ipv6autoconf": false, "addr": "", "speed": 0, "dhcpv6": false, "ipv6addrs": [], "mtu": "1500", "dhcpv4": false, "netmask": "", "ipv4defaultroute": false, "ipv4addrs": [], "hwaddr": "52:54:00:19:c1:02", "ipv6gateway": "::", "gateway": "" } }, "supportsIPv6": true, "netConfigDirty": "False", "vlans": {}, "networks": { "ovirtmgmt": { "iface": "ovirtmgmt", "ipv6autoconf": false, "addr": "192.168.150.2", "dhcpv6": false, "ipv6addrs": [ "2::2/120" ], "switch": "legacy", 
"bridged": true, "southbound": "eth0", "dhcpv4": false, "netmask": "255.255.255.0", "ipv4defaultroute": true, "stp": "off", "ipv4addrs": [ "192.168.150.2/24" ], "mtu": "1500", "ipv6gateway": "::", "gateway": "192.168.150.254", "ports": [ "eth0" ] } } } 3. Host -> Setup Networks IPv6 -> Static Address: 2::2/120 Gateway 2:ff -> Click OK 4. IPv6 Default Route is not pushed. 2019-08-28 13:48:33,685+10 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-14) [9cea2a1c-64b6-4877-954f-1a0869add1da] EVENT_ID: NETWORK_REMOVING_IPV6_GATEWAY_FROM_OLD_DEFAULT_ROUTE_ROLE_ATTACHMENT(10,926), On cluster Default the 'Default Route Role' network is no longer network ovirtmgmt. The IPv6 gateway is being removed from this network. 2019-08-28 13:48:33,687+10 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] (default task-14) [9cea2a1c-64b6-4877-954f-1a0869add1da] START, HostSetupNetworksVDSCommand(HostName = host2.kvm, HostSetupNetworksVdsCommandParameters:{hostId='c0bc651e-be6d-4fa1-8037-f3eb7fb86380', vds='Host[host2.kvm,c0bc651e-be6d-4fa1-8037-f3eb7fb86380]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='[]', ipv4BootProtocol='STATIC_IP', ipv4Address='192.168.150.2', ipv4Netmask='255.255.255.0', ipv4Gateway='192.168.150.254', ipv6BootProtocol='STATIC_IP', ipv6Address='2::2', ipv6Prefix='120', ipv6Gateway='null', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'}), log id: 4a7db19f -------------------------------------------------------------------------------------------- In addition: (In reply to eraviv from comment #2) > According to your 'steps to reproduce' you changed the ipv6 method to static > from engine side at some point. This might still have left the ipv4 on > dynamic protocol (dual stack is not supported) and might have not really > made the ipv6 static if for example this is a virtual host which is getting > router advertisments from libvirt (which would remove your static gateway). Yes, the initial reproducer is "not supported". You are right, that was an unfortunate shortcut I took to reproduce the problem. Still, the problem reproduced in tests 1 and 3 above, which matches our customer findings. In my setup there is no radvd or anything else doing RA's adversiting a network prefix for autoconf or a gateway. Also "dual stack is not supported" is still not clear to me, its ambiguous. The documentation states the same "dual stack is not supported" [1] and has a note that implies that this means one cannot mix IPv4 and IPv6 hosts in the same cluster, which is fine. ~~~ Set all hosts in a cluster to use the same IP stack for their management network; either IPv4 or IPv6 only. Dual stack is not supported. ~~~ However, in networking usually dual stack means a host having both IPv4 and IPv6 addresses configured at the same time, not related to RHV clusters. My undestanding is one can have all hosts with both IPv4 and IPv6 configured, however, they need to be all added to the engine using the IPv4 OR the IPv6 address, without mixing (I understanding this can break other things migrations etc...). Is this correct? 
We need to clarify this in the docs, I think, whether:

* it's not supported to have dual stack as in "IPv6 and IPv4 addresses configured on hosts at the same time" - which would be very weird in 2019, or
* it's not supported to have dual stack as in "all hosts in the same cluster need to be added to the engine using IPv6 OR IPv4 addresses, not mixing" - this is understandable

> * the state of the corresponding NIC on the host prior to attaching the host to the engine: it should be configured static for *both* IPv4 and IPv6
> * the state of the ovirtmgmt network before the change, as viewed from web-admin
> * whether ovirtmgmt had a dynamic IPv4 attachment configuration prior to setting up IPv6 and after attaching the host
> * detailed steps to reproduce and the exact inputs to Setup Networks

The above should answer these questions.

> * the final state of ovirtmgmt as viewed from web-admin

The IPv6 gateway vanished from the Setup Networks window; the network is in sync.

> * whether there was another NIC on the host with dynamic IPv6 / a default IPv6 gateway to begin with. Adding another gateway on ovirtmgmt would cause VDSM to perceive multiple IPv6 gateways on the host, which would cause VDSM to return no gateway ("::") to the engine.

No, single network, single NIC.

[1] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/sect-hosts_and_networking#IPv6-networking-support-labels

Sorry for the lack of details in the BZ description, I thought this was simpler. I hope the above helps; let me know if you need more detailed info.

Thanks

Created attachment 1609105 [details]
reproduction without dynamic and IPv4
Germano, thank you very much for reporting this bug in such an amazing way!
I can reproduce the bug by:
1. Adding the host to the engine with static IPv6 and DHCP IPv4
/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="yes"
PERSISTENT_DHCLIENT="1"
IPV6INIT=yes
IPV6ADDR=fc01::2/64
IPV6_AUTOCONF=no
NM_CONTROLLED=no
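
Before adding the host to the engine, it is worth double-checking on the host that the interface really came up with only the static IPv6 address and no default route learned from router advertisements (the concern raised earlier about libvirt RAs). A minimal sanity check with standard iproute2 commands, assuming eth0 as in the config above:

# ip -6 addr show dev eth0
# ip -6 route show default

The first command should list only fc01::2/64 plus the link-local address, and the second should print nothing at this point.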
see 2018-08-28-ovirt-43-host9.log for details.
This triggers
HostSetupNetworksVDSCommand(HostName = ovirt-43-host9, HostSetupNetworksVdsCommandParameters:{hostId='b2bc5203-011c-4ab6-b4f8-dcc16eb06dea', vds='Host[ovirt-43-host9,b2bc5203-011c-4ab6-b4f8-dcc16eb06dea]', rollbackOnFailure='true', commitOnSuccess='false', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='null', ipv4BootProtocol='DHCP', ipv4Address='null', ipv4Netmask='null', ipv4Gateway='null', ipv6BootProtocol='STATIC_IP', ipv6Address='fc01::2', ipv6Prefix='64', ipv6Gateway='::', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'})
and results in 0.png
This works as expected.
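
After the host is added, the IP configuration moves from eth0 to the ovirtmgmt bridge, so the same host-side check now targets the bridge instead of the NIC (a quick sketch; ovirtmgmt is the management network name used throughout this report):

# ip addr show dev ovirtmgmt
# ip -6 route show default

The bridge should carry the DHCP IPv4 address and fc01::2/64, and there should still be no IPv6 default route, matching the ipv6Gateway='::' sent in the command above.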
2. Removing IPv4 via Engine triggers
HostSetupNetworksVDSCommand(HostName = ovirt-43-host9, HostSetupNetworksVdsCommandParameters:{hostId='b2bc5203-011c-4ab6-b4f8-dcc16eb06dea', vds='Host[ovirt-43-host9,b2bc5203-011c-4ab6-b4f8-dcc16eb06dea]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='[]', ipv4BootProtocol='NONE', ipv4Address='null', ipv4Netmask='null', ipv4Gateway='null', ipv6BootProtocol='STATIC_IP', ipv6Address='fc01::2', ipv6Prefix='64', ipv6Gateway='null', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'})
and results in 1.png.
As expected, the host now has no default gateway anymore.
3. Adding fc01::3 as gateway triggers
HostSetupNetworksVDSCommand(HostName = ovirt-43-host9, HostSetupNetworksVdsCommandParameters:{hostId='b2bc5203-011c-4ab6-b4f8-dcc16eb06dea', vds='Host[ovirt-43-host9,b2bc5203-011c-4ab6-b4f8-dcc16eb06dea]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='[]', ipv4BootProtocol='NONE', ipv4Address='null', ipv4Netmask='null', ipv4Gateway='null', ipv6BootProtocol='STATIC_IP', ipv6Address='fc01::2', ipv6Prefix='64', ipv6Gateway='fc01::3', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'}),
results in 2.png.
This works as expected and shows that it is possible to set an IPv6 gateway if no default gateway (neither IPv4 nor IPv6) was set before.
Please note that the host now uses static IPv6 only; neither IPv4 nor any dynamic configuration is in use anymore.
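
To see what each of these steps actually leaves on the host, the gateway reported by VDSM can be read straight out of the same getNetworkCapabilities call used in the description; "::" means no IPv6 gateway is configured. A minimal sketch, assuming python is available on the host:

# vdsm-client Host getNetworkCapabilities | python -c 'import json,sys; print(json.load(sys.stdin)["networks"]["ovirtmgmt"]["ipv6gateway"])'

After step 3 this prints fc01::3; after step 4 below it should fall back to "::".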
4. Changing the gateway to fc01::4 triggers
2019-08-28 18:07:18,240+02 INFO [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-23) [cc70db93-7992-4f29-8203-ddd0a44258c5] Lock acquired, from now a monitoring of host will be skipped for host 'ovirt-43-host9' from data-center 'ipv6_dc'
2019-08-28 18:07:18,244+02 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-23) [cc70db93-7992-4f29-8203-ddd0a44258c5] EVENT_ID: NETWORK_REMOVING_IPV6_GATEWAY_FROM_OLD_DEFAULT_ROUTE_ROLE_ATTACHMENT(10,926), On cluster ipv6_cluster the 'Default Route Role' network is no longer network ovirtmgmt. The IPv6 gateway is being removed from this network.
2019-08-28 18:07:18,245+02 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] (default task-23) [cc70db93-7992-4f29-8203-ddd0a44258c5] START, HostSetupNetworksVDSCommand(HostName = ovirt-43-host9, HostSetupNetworksVdsCommandParameters:{hostId='b2bc5203-011c-4ab6-b4f8-dcc16eb06dea', vds='Host[ovirt-43-host9,b2bc5203-011c-4ab6-b4f8-dcc16eb06dea]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='[]', ipv4BootProtocol='NONE', ipv4Address='null', ipv4Netmask='null', ipv4Gateway='null', ipv6BootProtocol='STATIC_IP', ipv6Address='fc01::2', ipv6Prefix='64', ipv6Gateway='null', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'}), log id: 77984674
and results in 3.png.
This shows that updating the IPv6 gateway does not work from the engine.
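
On the engine side, the misfired audit event quoted above is the quickest marker of this code path; a simple check, assuming the default log location on the engine machine:

# grep NETWORK_REMOVING_IPV6_GATEWAY_FROM_OLD_DEFAULT_ROUTE_ROLE_ATTACHMENT /var/log/ovirt-engine/engine.log

Every gateway update that gets silently dropped leaves one of these warnings, even though the default route role of ovirtmgmt never actually changed.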
Eitan and Michael, do you still have doubts that this bug could be related to dynamic IP configuration or dual-stack?
Verified on RHV v4.4.0-0.6.master.el7, non-hosted-engine environment:

* rhvh-4.4.0.10-0.20191204.0+1
* vdsm-4.40.0-154.git4e13ea9.el8ev.x86_64
* ovirt-engine-4.4.0-0.6.master.el7.noarch

Hi Dominik, please review the updated contents of the Doc Text field and requires_doc_text flag.

(In reply to Rolfe Dlugy-Hegwer from comment #27)
> Hi Dominik, please review the updated contents of the Doc Text field and requires_doc_text flag.

Doc Text is fine, thanks.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247