Bug 1745384 - [IPv6 Static] Engine should allow updating network's static ipv6gateway
Summary: [IPv6 Static] Engine should allow updating network's static ipv6gateway
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.3.5
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ovirt-4.4.0
Target Release: ---
Assignee: Dominik Holler
QA Contact: Roni
Docs Contact: Rolfe Dlugy-Hegwer
URL:
Whiteboard:
Depends On:
Blocks: 1759461
 
Reported: 2019-08-26 01:30 UTC by Germano Veit Michel
Modified: 2023-03-24 15:19 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, trying to update the IPv6 gateway in the Setup Networks dialog removed it from the network attachment. The current release fixes this issue: You can update the IPv6 gateway if the related network has the default route role.
Clone Of:
: 1759461 (view as bug list)
Environment:
Last Closed: 2020-08-04 13:20:09 UTC
oVirt Team: Network
Target Upstream Version:
Embargoed:


Attachments
reproduction without dynamic and IPv4 (8.68 MB, application/x-xz)
2019-08-28 18:33 UTC, Dominik Holler


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 4370841 0 None None None 2019-08-26 01:55:03 UTC
Red Hat Product Errata RHSA-2020:3247 0 None None None 2020-08-04 13:20:39 UTC
oVirt gerrit 102891 0 'None' MERGED core: Fix IPv6 gateway update 2020-12-21 06:39:49 UTC

Internal Links: 1809540

Description Germano Veit Michel 2019-08-26 01:30:05 UTC
Description of problem:

The engine seems to always, and incorrectly, filter out the ipv6Gateway parameter from HostSetupNetworksVDSCommand, sending 'null' instead of the actual gateway, so the host ends up without an IPv6 default route.

After looking at the code [1], I tried setting IPv6 to 'None' and then back to 'Static'; that did not work either, same problem. It seems that 'hasIpv6StaticBootProto(previousDefaultRouteAttachment)' is always true.
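
A minimal, hypothetical Python mimic of the suspected condition (the real code is Java in HostSetupNetworksCommand, see [1]; the function and field names here are illustrative only, not the engine's):

# Hypothetical mimic of the suspected (buggy) check; illustrative names only,
# the actual implementation is Java in HostSetupNetworksCommand [1].
def should_clear_ipv6_gateway(previous_default_route_attachment):
    """The gateway of the previous default-route attachment is cleared whenever
    it uses a static IPv6 boot protocol, even when the attachment being edited
    is that very same attachment (i.e. the role is not actually moving)."""
    return (previous_default_route_attachment is not None and
            previous_default_route_attachment.get("ipv6_boot_protocol") == "STATIC_IP")

ovirtmgmt = {"network": "ovirtmgmt", "ipv6_boot_protocol": "STATIC_IP"}
# Merely updating the gateway on ovirtmgmt still trips the condition:
print(should_clear_ipv6_gateway(ovirtmgmt))  # True -> ipv6Gateway is sent as 'null'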

2019-08-26 11:16:08,054+10 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-36) [cbb7a7fa-ab46-44c0-999f-0f390a0ed7e1] EVENT_ID: NETWORK_REMOVING_IPV6_GATEWAY_FROM_OLD_DEFAULT_ROUTE_ROLE_ATTACHMENT(10,926), On cluster Default the 'Default Route Role' network is no longer network ovirtmgmt. The IPv6 gateway is being removed from this network.

2019-08-26 11:16:08,057+10 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] (default task-36) [cbb7a7fa-ab46-44c0-999f-0f390a0ed7e1] START, HostSetupNetworksVDSCommand(HostName = rhel-h2, HostSetupNetworksVdsCommandParameters:{hostId='ee5f0ee7-8c2c-4fc8-8b06-50e08242436b', vds='Host[rhel-h2,ee5f0ee7-8c2c-4fc8-8b06-50e08242436b]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='[]', ipv4BootProtocol='DHCP', ipv4Address='null', ipv4Netmask='null', ipv4Gateway='null', ipv6BootProtocol='STATIC_IP', ipv6Address='2::1', ipv6Prefix='64', ipv6Gateway='null', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'}), log id: 8740382
2019-08-26 11:16:08,059+10 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] (default task-36) [cbb7a7fa-ab46-44c0-999f-0f390a0ed7e1] FINISH, HostSetupNetworksVDSCommand, return: , log id: 8740382

I'm also confused about why the engine does this when configuring the host's IPv6 default route for the first time; it seems to be a side effect of this change:

    core: fix ipv6 gw removal on default route role move
    
    1. When the default route role is moved away from a network attachment
       it might have an empty ipv6 primary address if for example it has
       been configured with ipv4 only after having had an ipv6
       configuration. This triggers an ipv6 gateway removal and
       notification also for ipv4 only attachments.
       Therefore remove the ipv6 gateway only if it has a static boot
       protocol configured. Dynamic boot protocol is not checked for
       because:
       - it is not supported
       - a gateway will be immediately reassigned anyway by the dhcp server
         or by autoconf, making the removal and notification false.
    2. The removal and notification were called twice - fix to a single
       call.
    
    Change-Id: Idc5501fb4375be2b32f132c8fed362bace757636
    Bug-Url: https://bugzilla.redhat.com/1685818

Version-Release number of selected component (if applicable):
rhvm-4.3.5.4-0.1.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. Edit the ovirtmgmt network (the cluster's Default Route and Management network)
2. Set IPv6 to Static
3. Fill in the address, prefix length, and gateway (default route).
4. Click OK

Actual results:
IPv6 Default Route not configured

Expected results:
IPv6 Default Route configured

Additional info:
[1] https://github.com/oVirt/ovirt-engine/blob/549861b0a8b31c73262d130462384ef8ae14805c/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/network/host/HostSetupNetworksCommand.java#L683

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/administration_guide/index#Configuring_a_Default_Route
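
For completeness, the same gateway update can also be attempted through the Python SDK instead of the Setup Networks dialog. The sketch below uses ovirt-engine-sdk-python (ovirtsdk4) and is only a best-effort illustration; the engine URL, credentials, host name and addresses are placeholders, not taken from this environment:

#!/usr/bin/python3
# Sketch only: set a static IPv6 address and gateway on ovirtmgmt via the
# oVirt/RHV Python SDK (ovirtsdk4). URL, credentials, host name and the
# addresses below are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)

hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=rhel-h2')[0]
host_service = hosts_service.host_service(host.id)

host_service.setup_networks(
    modified_network_attachments=[
        types.NetworkAttachment(
            network=types.Network(name='ovirtmgmt'),
            ip_address_assignments=[
                types.IpAddressAssignment(
                    assignment_method=types.BootProtocol.STATIC,
                    ip=types.Ip(
                        version=types.IpVersion.V6,
                        address='2::2',
                        netmask='120',   # for IPv6 the prefix length goes here
                        gateway='2::ff',
                    ),
                ),
            ],
        ),
    ],
)

# Persist the new configuration on the host, like the dialog's OK button does.
host_service.commit_net_config()
connection.close()

If the filtering happens in the backend HostSetupNetworksCommand, the gateway would presumably be dropped on this path too, not only when going through the UI.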

Comment 1 Germano Veit Michel 2019-08-26 01:33:15 UTC
Just to be clearer: this happens when configuring the IPv6 default route of a host via Setup Networks. When setting it in the Setup Networks dialog and clicking OK, the engine always seems to remove the gateway and send null to the host. This is not related to moving the default route network.

Comment 2 eraviv 2019-08-27 07:54:59 UTC
Hi Germano,

According to your 'steps to reproduce', you changed the IPv6 method to static from the engine side at some point. This might still have left IPv4 on a dynamic protocol (dual stack is not supported) and might not have really made IPv6 static if, for example, this is a virtual host that is getting router advertisements from libvirt (which would remove your static gateway).

So can you please document the following (one way to capture the host-side items is sketched after this list):

* the state of the corresponding nic on the host prior to attaching the host to engine: it should be configured static for *both* ipv4 and ipv6
* the state of the ovirtmgmt network before the change as viewed from web-admin
* whether ovirtmgmt had dynamic ipv4 attachment configuration prior to setting up ipv6 and after attaching the host?
* detailed steps to reproduce and the exact inputs to setup networks
* the final state of ovirtmgmt as viewed from web-admin
* was there another NIC on the host with dynamic IPv6 or a default IPv6 gateway to begin with? Adding another gateway on ovirtmgmt would cause vdsm to perceive multiple IPv6 gateways on the host, which would cause vdsm to return no gateway ("::") to the engine.
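
One way to gather the host-side items above in one go (a sketch only; it assumes the iproute2 tools and vdsm-client are available on the host):

#!/usr/bin/python3
# Sketch: collect the host-side networking state requested above.
# Assumes iproute2 and vdsm-client are installed on the host.
import json
import subprocess

def run(cmd):
    print('### ' + ' '.join(cmd))
    out = subprocess.run(cmd, capture_output=True, text=True, check=False).stdout
    print(out)
    return out

run(['ip', '-4', 'route', 'show'])
run(['ip', '-6', 'addr', 'show'])
run(['ip', '-6', 'route', 'show'])

caps = json.loads(run(['vdsm-client', 'Host', 'getNetworkCapabilities']))
ovirtmgmt = caps.get('networks', {}).get('ovirtmgmt', {})
print('ovirtmgmt ipv6gateway as seen by vdsm:', ovirtmgmt.get('ipv6gateway'))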

Thanks

Comment 3 Germano Veit Michel 2019-08-28 04:07:12 UTC
Hi Eitan,

OK, I understand my previous reproducer was a "not supported shortcut" to reproduce the problem. So I've reproduced it again under more correct conditions and captured more details for you.
I've done three different tests: two failed and one succeeded. Tests 1 and 3 represent what our customer is doing, and both fail.
The customer also indicates this was working fine on 4.1 and 4.2, and broke when they upgraded to 4.3.

TEST 1: adding static IPv6 to a host with already configured static IPv4, host already added to engine

1. Prior to configuring Static IPv6, Host status is a single IPv4 static ovirtmgmt network

# vdsm-client Host getNetworkCapabilities
{
    "bridges": {
        "ovirtmgmt": {
            "ipv6autoconf": true, 
            "addr": "192.168.150.2", 
            "dhcpv6": false, 
            "ipv6addrs": [], 
            "mtu": "1500", 
            "dhcpv4": false, 
            "netmask": "255.255.255.0", 
            "ipv4defaultroute": true, 
            "stp": "off", 
            "ipv4addrs": [
                "192.168.150.2/24"
            ], 
            "ipv6gateway": "::", 
            "gateway": "192.168.150.254", 
            "opts": {
                "multicast_last_member_count": "2", 
                "vlan_protocol": "0x8100", 
                "hash_elasticity": "4", 
                "multicast_query_response_interval": "1000", 
                "group_fwd_mask": "0x0", 
                "multicast_snooping": "1", 
                "multicast_startup_query_interval": "3125", 
                "hello_timer": "0", 
                "multicast_querier_interval": "25500", 
                "max_age": "2000", 
                "hash_max": "512", 
                "stp_state": "0", 
                "topology_change_detected": "0", 
                "priority": "32768", 
                "multicast_igmp_version": "2", 
                "multicast_membership_interval": "26000", 
                "root_path_cost": "0", 
                "root_port": "0", 
                "multicast_stats_enabled": "0", 
                "multicast_startup_query_count": "2", 
                "nf_call_iptables": "0", 
                "vlan_stats_enabled": "0", 
                "hello_time": "200", 
                "topology_change": "0", 
                "bridge_id": "8000.52540019c102", 
                "topology_change_timer": "0", 
                "ageing_time": "30000", 
                "nf_call_ip6tables": "0", 
                "multicast_mld_version": "1", 
                "gc_timer": "3391", 
                "root_id": "8000.52540019c102", 
                "nf_call_arptables": "0", 
                "group_addr": "1:80:c2:0:0:0", 
                "multicast_last_member_interval": "100", 
                "default_pvid": "1", 
                "multicast_query_interval": "12500", 
                "multicast_query_use_ifaddr": "0", 
                "tcn_timer": "0", 
                "multicast_router": "1", 
                "vlan_filtering": "0", 
                "multicast_querier": "0", 
                "forward_delay": "0"
            }, 
            "ports": [
                "eth0"
            ]
        }
    }, 
    "bondings": {}, 
    "nameservers": [
        "192.168.150.254"
    ], 
    "nics": {
        "eth0": {
            "ipv6autoconf": false, 
            "addr": "", 
            "speed": 0, 
            "dhcpv6": false, 
            "ipv6addrs": [], 
            "mtu": "1500", 
            "dhcpv4": false, 
            "netmask": "", 
            "ipv4defaultroute": false, 
            "ipv4addrs": [], 
            "hwaddr": "52:54:00:19:c1:02", 
            "ipv6gateway": "::", 
            "gateway": ""
        }
    }, 
    "supportsIPv6": true, 
    "netConfigDirty": "False", 
    "vlans": {}, 
    "networks": {
        "ovirtmgmt": {
            "iface": "ovirtmgmt", 
            "ipv6autoconf": true, 
            "addr": "192.168.150.2", 
            "dhcpv6": false, 
            "ipv6addrs": [], 
            "switch": "legacy", 
            "bridged": true, 
            "southbound": "eth0", 
            "dhcpv4": false, 
            "netmask": "255.255.255.0", 
            "ipv4defaultroute": true, 
            "stp": "off", 
            "ipv4addrs": [
                "192.168.150.2/24"
            ], 
            "mtu": "1500", 
            "ipv6gateway": "::", 
            "gateway": "192.168.150.254", 
            "ports": [
                "eth0"
            ]
        }
    }
}

2. Host -> Setup Networks
   IPv6 -> Static
   Address: 2::2/120
   Gateway: 2::ff
   -> Click OK

3. The engine does not send the IPv6 gateway; the ipv6Gateway parameter sent to the host is null:

2019-08-28 12:06:47,313+10 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-1) [3655e7f6-65b5-4411-a570-9c10199274c9] EVENT_ID: NETWORK_REMOVING_IPV6_GATEWAY_FROM_OLD_DEFAULT_ROUTE_ROLE_ATTACHMENT(10,926), On cluster Default the 'Default Route Role' network is no longer network ovirtmgmt. The IPv6 gateway is being removed from this network.

2019-08-28 12:06:47,315+10 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] (default task-1) [3655e7f6-65b5-4411-a570-9c10199274c9] START, HostSetupNetworksVDSCommand(HostName = host2.kvm, HostSetupNetworksVdsCommandParameters:{hostId='aba2bce2-7f50-4d4e-9b1d-453163f88f16', vds='Host[host2.kvm,aba2bce2-7f50-4d4e-9b1d-453163f88f16]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='[]', ipv4BootProtocol='STATIC_IP', ipv4Address='192.168.150.2', ipv4Netmask='255.255.255.0', ipv4Gateway='192.168.150.254', ipv6BootProtocol='STATIC_IP', ipv6Address='2::2', ipv6Prefix='120', ipv6Gateway='null', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'}), log id: 784df1f8

4. The result is:

# vdsm-client Host getNetworkCapabilities
{
    "bridges": {
        "ovirtmgmt": {
            "ipv6autoconf": false, 
            "addr": "192.168.150.2", 
            "dhcpv6": false, 
            "ipv6addrs": [
                "2::2/120"
            ], 
            "mtu": "1500", 
            "dhcpv4": false, 
            "netmask": "255.255.255.0", 
            "ipv4defaultroute": true, 
            "stp": "off", 
            "ipv4addrs": [
                "192.168.150.2/24"
            ], 
            "ipv6gateway": "::", 
            "gateway": "192.168.150.254", 
            "opts": {
                "multicast_last_member_count": "2", 
                "vlan_protocol": "0x8100", 
                "hash_elasticity": "4", 
                "multicast_query_response_interval": "1000", 
                "group_fwd_mask": "0x0", 
                "multicast_snooping": "1", 
                "multicast_startup_query_interval": "3125", 
                "hello_timer": "0", 
                "multicast_querier_interval": "25500", 
                "max_age": "2000", 
                "hash_max": "512", 
                "stp_state": "0", 
                "topology_change_detected": "0", 
                "priority": "32768", 
                "multicast_igmp_version": "2", 
                "multicast_membership_interval": "26000", 
                "root_path_cost": "0", 
                "root_port": "0", 
                "multicast_stats_enabled": "0", 
                "multicast_startup_query_count": "2", 
                "nf_call_iptables": "0", 
                "vlan_stats_enabled": "0", 
                "hello_time": "200", 
                "topology_change": "0", 
                "bridge_id": "8000.52540019c102", 
                "topology_change_timer": "0", 
                "ageing_time": "30000", 
                "nf_call_ip6tables": "0", 
                "multicast_mld_version": "1", 
                "gc_timer": "6187", 
                "root_id": "8000.52540019c102", 
                "nf_call_arptables": "0", 
                "group_addr": "1:80:c2:0:0:0", 
                "multicast_last_member_interval": "100", 
                "default_pvid": "1", 
                "multicast_query_interval": "12500", 
                "multicast_query_use_ifaddr": "0", 
                "tcn_timer": "0", 
                "multicast_router": "1", 
                "vlan_filtering": "0", 
                "multicast_querier": "0", 
                "forward_delay": "0"
            }, 
            "ports": [
                "eth0"
            ]
        }
    }, 
    "bondings": {}, 
    "nameservers": [
        "192.168.150.254"
    ], 
    "nics": {
        "eth0": {
            "ipv6autoconf": false, 
            "addr": "", 
            "speed": 0, 
            "dhcpv6": false, 
            "ipv6addrs": [], 
            "mtu": "1500", 
            "dhcpv4": false, 
            "netmask": "", 
            "ipv4defaultroute": false, 
            "ipv4addrs": [], 
            "hwaddr": "52:54:00:19:c1:02", 
            "ipv6gateway": "::", 
            "gateway": ""
        }
    }, 
    "supportsIPv6": true, 
    "netConfigDirty": "False", 
    "vlans": {}, 
    "networks": {
        "ovirtmgmt": {
            "iface": "ovirtmgmt", 
            "ipv6autoconf": false, 
            "addr": "192.168.150.2", 
            "dhcpv6": false, 
            "ipv6addrs": [
                "2::2/120"
            ], 
            "switch": "legacy", 
            "bridged": true, 
            "southbound": "eth0", 
            "dhcpv4": false, 
            "netmask": "255.255.255.0", 
            "ipv4defaultroute": true, 
            "stp": "off", 
            "ipv4addrs": [
                "192.168.150.2/24"
            ], 
            "mtu": "1500", 
            "ipv6gateway": "::", 
            "gateway": "192.168.150.254", 
            "ports": [
                "eth0"
            ]
        }
    }
}
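
The field that matters in these getNetworkCapabilities dumps is networks.ovirtmgmt.ipv6gateway, which is still "::" here, i.e. no IPv6 default route was configured. A small helper to print just that field (a sketch; it assumes vdsm-client is on the PATH):

#!/usr/bin/python3
# Sketch: print only the ovirtmgmt IPv6 address/gateway from vdsm's view.
import json
import subprocess

caps = json.loads(subprocess.check_output(
    ['vdsm-client', 'Host', 'getNetworkCapabilities'], text=True))
net = caps['networks']['ovirtmgmt']
print('ipv6addrs:  ', net.get('ipv6addrs'))
print('ipv6gateway:', net.get('ipv6gateway'))  # "::" means no IPv6 default route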

--------------------------------------------------------------------------------------------

TEST 2: Clean host with only IPv6 Static, no IPv4.
1. No network config at all on the host
2. Add IPv6 address, without default gateway
3. Add to engine
4. Host -> Setup Networks
   IPv6 -> Static
   Address: 2::2/120
   Gateway: 2::ff
   -> Click OK

5. Works, ipv6Gateway is sent.

2019-08-28 13:38:06,132+10 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] (default task-12) [8e1d1d0e-9cdc-4136-9ab2-58cd72d7a5e3] START, HostSetupNetworksVDSCommand(HostName = host2.kvm, HostSetupNetworksVdsCommandParameters:{hostId='240311f3-21a5-498c-a4f6-4ebfc1af1eb4', vds='Host[host2.kvm,240311f3-21a5-498c-a4f6-4ebfc1af1eb4]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='[]', ipv4BootProtocol='NONE', ipv4Address='null', ipv4Netmask='null', ipv4Gateway='null', ipv6BootProtocol='STATIC_IP', ipv6Address='2::2', ipv6Prefix='120', ipv6Gateway='2::ff', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'}), log id: 783cc446

--------------------------------------------------------------------------------------------

TEST 3: Clean Host with IPv4 static and IPv6 static

1. No network config at all on the host
2. Configure just IPv6+IPv4 Static, IPv6 without Gateway, add to the engine

# vdsm-client Host getNetworkCapabilities
{
    "bridges": {
        "ovirtmgmt": {
            "ipv6autoconf": false, 
            "addr": "192.168.150.2", 
            "dhcpv6": false, 
            "ipv6addrs": [
                "2::2/120"
            ], 
            "mtu": "1500", 
            "dhcpv4": false, 
            "netmask": "255.255.255.0", 
            "ipv4defaultroute": true, 
            "stp": "off", 
            "ipv4addrs": [
                "192.168.150.2/24"
            ], 
            "ipv6gateway": "::", 
            "gateway": "192.168.150.254", 
            "opts": {
                "multicast_last_member_count": "2", 
                "vlan_protocol": "0x8100", 
                "hash_elasticity": "4", 
                "multicast_query_response_interval": "1000", 
                "group_fwd_mask": "0x0", 
                "multicast_snooping": "1", 
                "multicast_startup_query_interval": "3125", 
                "hello_timer": "0", 
                "multicast_querier_interval": "25500", 
                "max_age": "2000", 
                "hash_max": "512", 
                "stp_state": "0", 
                "topology_change_detected": "0", 
                "priority": "32768", 
                "multicast_igmp_version": "2", 
                "multicast_membership_interval": "26000", 
                "root_path_cost": "0", 
                "root_port": "0", 
                "multicast_stats_enabled": "0", 
                "multicast_startup_query_count": "2", 
                "nf_call_iptables": "0", 
                "vlan_stats_enabled": "0", 
                "hello_time": "200", 
                "topology_change": "0", 
                "bridge_id": "8000.52540019c102", 
                "topology_change_timer": "0", 
                "ageing_time": "30000", 
                "nf_call_ip6tables": "0", 
                "multicast_mld_version": "1", 
                "gc_timer": "19441", 
                "root_id": "8000.52540019c102", 
                "nf_call_arptables": "0", 
                "group_addr": "1:80:c2:0:0:0", 
                "multicast_last_member_interval": "100", 
                "default_pvid": "1", 
                "multicast_query_interval": "12500", 
                "multicast_query_use_ifaddr": "0", 
                "tcn_timer": "0", 
                "multicast_router": "1", 
                "vlan_filtering": "0", 
                "multicast_querier": "0", 
                "forward_delay": "0"
            }, 
            "ports": [
                "eth0"
            ]
        }
    }, 
    "bondings": {}, 
    "nameservers": [
        "192.168.150.254"
    ], 
    "nics": {
        "eth0": {
            "ipv6autoconf": false, 
            "addr": "", 
            "speed": 0, 
            "dhcpv6": false, 
            "ipv6addrs": [], 
            "mtu": "1500", 
            "dhcpv4": false, 
            "netmask": "", 
            "ipv4defaultroute": false, 
            "ipv4addrs": [], 
            "hwaddr": "52:54:00:19:c1:02", 
            "ipv6gateway": "::", 
            "gateway": ""
        }
    }, 
    "supportsIPv6": true, 
    "netConfigDirty": "False", 
    "vlans": {}, 
    "networks": {
        "ovirtmgmt": {
            "iface": "ovirtmgmt", 
            "ipv6autoconf": false, 
            "addr": "192.168.150.2", 
            "dhcpv6": false, 
            "ipv6addrs": [
                "2::2/120"
            ], 
            "switch": "legacy", 
            "bridged": true, 
            "southbound": "eth0", 
            "dhcpv4": false, 
            "netmask": "255.255.255.0", 
            "ipv4defaultroute": true, 
            "stp": "off", 
            "ipv4addrs": [
                "192.168.150.2/24"
            ], 
            "mtu": "1500", 
            "ipv6gateway": "::", 
            "gateway": "192.168.150.254", 
            "ports": [
                "eth0"
            ]
        }
    }
}

3. Host -> Setup Networks
   IPv6 -> Static
   Address: 2::2/120
   Gateway: 2::ff
   -> Click OK

4. IPv6 Default Route is not pushed.

2019-08-28 13:48:33,685+10 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-14) [9cea2a1c-64b6-4877-954f-1a0869add1da] EVENT_ID: NETWORK_REMOVING_IPV6_GATEWAY_FROM_OLD_DEFAULT_ROUTE_ROLE_ATTACHMENT(10,926), On cluster Default the 'Default Route Role' network is no longer network ovirtmgmt. The IPv6 gateway is being removed from this network.

2019-08-28 13:48:33,687+10 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] (default task-14) [9cea2a1c-64b6-4877-954f-1a0869add1da] START, HostSetupNetworksVDSCommand(HostName = host2.kvm, HostSetupNetworksVdsCommandParameters:{hostId='c0bc651e-be6d-4fa1-8037-f3eb7fb86380', vds='Host[host2.kvm,c0bc651e-be6d-4fa1-8037-f3eb7fb86380]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='[]', ipv4BootProtocol='STATIC_IP', ipv4Address='192.168.150.2', ipv4Netmask='255.255.255.0', ipv4Gateway='192.168.150.254', ipv6BootProtocol='STATIC_IP', ipv6Address='2::2', ipv6Prefix='120', ipv6Gateway='null', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'}), log id: 4a7db19f

--------------------------------------------------------------------------------------------

In addition:

(In reply to eraviv from comment #2)
> According to your 'steps to reproduce' you changed the ipv6 method to static
> from engine side at some point. This might still have left the ipv4 on
> dynamic protocol (dual stack is not supported) and might have not really
> made the ipv6 static if for example this is a virtual host which is getting
> router advertisments from libvirt (which would remove your static gateway).
Yes, the initial reproducer is "not supported". You are right, that was an unfortunate shortcut I took to reproduce the problem. Still, the problem reproduces in tests 1 and 3 above, which matches our customer's findings.
In my setup there is no radvd or anything else sending RAs advertising a network prefix for autoconf or a gateway.

Also, "dual stack is not supported" is still not clear to me; it's ambiguous. The documentation states the same, "dual stack is not supported" [1], and has a note that implies this means one cannot mix IPv4 and IPv6 hosts in the same cluster, which is fine.
~~~
Set all hosts in a cluster to use the same IP stack for their management network; either IPv4 or IPv6 only. Dual stack is not supported. 
~~~

However, in networking, dual stack usually means a host having both IPv4 and IPv6 addresses configured at the same time; it is not related to RHV clusters. My understanding is that one can have all hosts with both IPv4 and IPv6 configured, but they all need to be added to the engine using either the IPv4 or the IPv6 address, without mixing (I understand mixing can break other things, migrations etc.). Is this correct? We need to clarify in the docs, I think, whether:
* It's not supported to have dual stack as in "IPv6 and IPv4 addresses configured on hosts at the same time" - which would be very weird in 2019
* It's not supported to have dual stack as in "all hosts in the same cluster need to be added to the engine using IPv6 OR IPv4 addresses, not mixing" - this is understandable

> * the state of the corresponding nic on the host prior to attaching the host to engine: it should be configured static for *both* ipv4 and ipv6
> * the state of the ovirtmgmt network before the change as viewed from web-admin
> * whether ovirtmgmt had dynamic ipv4 attachment configuration prior to setting up ipv6 and after attaching the host?
> * detailed steps to reproduce and the exact inputs to setup networks
The above should answer these questions.

> * the final state of ovirtmgmt as viewed from web-admin
The IPv6 gateway vanished from the Setup Networks window; the network is in sync.

> * was there another nic on the host with dynamic ipv6\a default ipv6 gateway to begin with. Adding another gateway on ovirtmgmt would cause vdsm to perceive multiple ipv6 gateways on the host which would cause vdsm to return no gateway ("::") to engine.
No, single network, single nic.

[1] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/administration_guide/sect-hosts_and_networking#IPv6-networking-support-labels

Sorry for the lack of detail in the BZ description; I thought this was simpler. I hope the above helps; let me know if you need more detailed info.

Thanks

Comment 5 Daniel Gur 2019-08-28 13:11:35 UTC
sync2jira

Comment 6 Daniel Gur 2019-08-28 13:15:48 UTC
sync2jira

Comment 7 Dominik Holler 2019-08-28 18:33:22 UTC
Created attachment 1609105 [details]
reproduction without dynamic and IPv4

Germano, thank you very much for reporting this bug in such an amazing way!

I can reproduce the bug by
1. Adding the host to the engine with static IPv6 and DHCP IPv4
  /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="yes"
PERSISTENT_DHCLIENT="1"
IPV6INIT=yes
IPV6ADDR=fc01::2/64
IPV6_AUTOCONF=no
NM_CONTROLLED=no

see 2018-08-28-ovirt-43-host9.log for details.
This triggers
HostSetupNetworksVDSCommand(HostName = ovirt-43-host9, HostSetupNetworksVdsCommandParameters:{hostId='b2bc5203-011c-4ab6-b4f8-dcc16eb06dea', vds='Host[ovirt-43-host9,b2bc5203-011c-4ab6-b4f8-dcc16eb06dea]', rollbackOnFailure='true', commitOnSuccess='false', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='null', ipv4BootProtocol='DHCP', ipv4Address='null', ipv4Netmask='null', ipv4Gateway='null', ipv6BootProtocol='STATIC_IP', ipv6Address='fc01::2', ipv6Prefix='64', ipv6Gateway='::', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'})
and results in 0.png

This works as expected.

2. Removing IPv4 via Engine triggers
HostSetupNetworksVDSCommand(HostName = ovirt-43-host9, HostSetupNetworksVdsCommandParameters:{hostId='b2bc5203-011c-4ab6-b4f8-dcc16eb06dea', vds='Host[ovirt-43-host9,b2bc5203-011c-4ab6-b4f8-dcc16eb06dea]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='[]', ipv4BootProtocol='NONE', ipv4Address='null', ipv4Netmask='null', ipv4Gateway='null', ipv6BootProtocol='STATIC_IP', ipv6Address='fc01::2', ipv6Prefix='64', ipv6Gateway='null', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'})
and results in 1.png.
As expected, the host now has no default gateway anymore.


3. Adding fc01::3 as gateway triggers
HostSetupNetworksVDSCommand(HostName = ovirt-43-host9, HostSetupNetworksVdsCommandParameters:{hostId='b2bc5203-011c-4ab6-b4f8-dcc16eb06dea', vds='Host[ovirt-43-host9,b2bc5203-011c-4ab6-b4f8-dcc16eb06dea]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='[]', ipv4BootProtocol='NONE', ipv4Address='null', ipv4Netmask='null', ipv4Gateway='null', ipv6BootProtocol='STATIC_IP', ipv6Address='fc01::2', ipv6Prefix='64', ipv6Gateway='fc01::3', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'}),
results in 2.png.
This works as expected and shows that it is possible to set an IPv6 gateway if no default gateway (neither IPv4 nor IPv6) was set before.
Please note that the host is now static IPv6 only; neither IPv4 nor dynamic configuration is used anymore.

4. Changing the gateway to fc01::4 triggers
2019-08-28 18:07:18,240+02 INFO  [org.ovirt.engine.core.bll.network.host.HostSetupNetworksCommand] (default task-23) [cc70db93-7992-4f29-8203-ddd0a44258c5] Lock acquired, from now a monitoring of host will be skipped for host 'ovirt-43-host9' from data-center 'ipv6_dc'
2019-08-28 18:07:18,244+02 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-23) [cc70db93-7992-4f29-8203-ddd0a44258c5] EVENT_ID: NETWORK_REMOVING_IPV6_GATEWAY_FROM_OLD_DEFAULT_ROUTE_ROLE_ATTACHMENT(10,926), On cluster ipv6_cluster the 'Default Route Role' network is no longer network ovirtmgmt. The IPv6 gateway is being removed from this network.
2019-08-28 18:07:18,245+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand] (default task-23) [cc70db93-7992-4f29-8203-ddd0a44258c5] START, HostSetupNetworksVDSCommand(HostName = ovirt-43-host9, HostSetupNetworksVdsCommandParameters:{hostId='b2bc5203-011c-4ab6-b4f8-dcc16eb06dea', vds='Host[ovirt-43-host9,b2bc5203-011c-4ab6-b4f8-dcc16eb06dea]', rollbackOnFailure='true', commitOnSuccess='true', connectivityTimeout='120', networks='[HostNetwork:{defaultRoute='true', bonding='false', networkName='ovirtmgmt', vdsmName='ovirtmgmt', nicName='eth0', vlan='null', vmNetwork='true', stp='false', properties='[]', ipv4BootProtocol='NONE', ipv4Address='null', ipv4Netmask='null', ipv4Gateway='null', ipv6BootProtocol='STATIC_IP', ipv6Address='fc01::2', ipv6Prefix='64', ipv6Gateway='null', nameServers='null'}]', removedNetworks='[]', bonds='[]', removedBonds='[]', clusterSwitchType='LEGACY', managementNetworkChanged='true'}), log id: 77984674

and results in 3.png.
This shows that updating the IPv6 gateway does not work in the engine.

Eitan and Michael, do you still suspect that this bug is related to dynamic IP configuration or dual stack?
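
For reference, based on the Doc Text above and the linked gerrit change 102891, the fix presumably narrows the condition so that the old static IPv6 gateway is only removed when the default route role actually moves to a different attachment. A hypothetical Python mimic of that corrected check (not the actual Java code):

# Hypothetical mimic of the corrected check; names are illustrative only.
def should_clear_ipv6_gateway(previous_attachment, new_default_route_attachment):
    """Only clear the old static IPv6 gateway when the default route role
    actually moves to a different network attachment."""
    if previous_attachment is None or new_default_route_attachment is None:
        return False
    role_moved = previous_attachment["network"] != new_default_route_attachment["network"]
    has_static_ipv6 = previous_attachment.get("ipv6_boot_protocol") == "STATIC_IP"
    return role_moved and has_static_ipv6

ovirtmgmt = {"network": "ovirtmgmt", "ipv6_boot_protocol": "STATIC_IP"}
# Updating the gateway on the attachment that keeps the role no longer clears it:
print(should_clear_ipv6_gateway(ovirtmgmt, ovirtmgmt))  # False -> gateway is preserved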

Comment 19 Roni 2019-12-10 09:47:42 UTC
Verified on RHV v4.4.0-0.6.master.el7
non-hosted-engine environment

rhvh-4.4.0.10-0.20191204.0+1
vdsm-4.40.0-154.git4e13ea9.el8ev.x86_64
ovirt-engine-4.4.0-0.6.master.el7.noarch

Comment 21 RHV bug bot 2019-12-13 13:16:19 UTC
WARN: Bug status (VERIFIED) wasn't changed, but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 22 RHV bug bot 2019-12-20 17:45:47 UTC
WARN: Bug status (VERIFIED) wasn't changed, but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 23 RHV bug bot 2020-01-08 14:49:41 UTC
WARN: Bug status (VERIFIED) wasn't changed, but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 24 RHV bug bot 2020-01-08 15:18:02 UTC
WARN: Bug status (VERIFIED) wasn't changed, but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 25 RHV bug bot 2020-01-24 19:51:29 UTC
WARN: Bug status (VERIFIED) wasn't changed, but the following should be fixed:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops

Comment 27 Rolfe Dlugy-Hegwer 2020-03-03 11:04:28 UTC
Hi Dominik, please review the updated contents of the Doc Text field and requires_doc_text flag.

Comment 28 Dominik Holler 2020-03-03 11:06:52 UTC
(In reply to Rolfe Dlugy-Hegwer from comment #27)
> Hi Dominik, please review the updated contents of the Doc Text field and
> requires_doc_text flag.

Doc Text is fine, thanks.

Comment 31 errata-xmlrpc 2020-08-04 13:20:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247

