Description of problem:

/etc/sysconfig/network-scripts/ifcfg-eno1 contains:

IPADDR0=<primary public address>
PREFIX0=<prefix for IPADDR0>
GATEWAY0=<default gateway>
IPADDR1=<additional private address>
PREFIX1=<prefix for IPADDR1>

When the hosted-engine setup creates the ovirtmgmt bridge, it is created with only one IP address, and in the case above it is the second address, IPADDR1. During remote deployment via IPADDR0 the server then becomes uncontactable, setup fails fatally, and the server must be recovered from the console.

Version-Release number of selected component (if applicable):
ovirt-3.5 rc1.1
ovirt-hosted-engine-setup-1.2.0-0.1.master.20140820130713.gitd832f86.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. Configure multiple IP addresses on the primary Ethernet adaptor
2. Deploy ovirt-hosted-engine using ovirt-hosted-engine-setup

Actual results:
/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt contains:
IPADDR=<IP from original IPADDR1 entry>
NETMASK=<a netmask>
GATEWAY=<gateway from original GATEWAY0 entry>

Expected results:
IPADDR0=<as per original ifcfg-eno1 file>
PREFIX0=<as per original ifcfg-eno1 file>
GATEWAY0=<as per original ifcfg-eno1 file>
IPADDR1=<as per original ifcfg-eno1 file>
PREFIX1=<as per original ifcfg-eno1 file>
Hosted engine uses vdsm to create the bridge; moving this bug to vdsm.
I am afraid that oVirt has never supported more than one IP address per network. Could you explain here your motivation for using multiple addresses? If it has a wide audience, we can consider this as a future feature.
(In reply to Dan Kenigsberg from comment #2)
> I am afraid that oVirt has never supported more than one IP address per
> network.
>
> Could you explain here your motivation for using multiple addresses? If it
> has a wide audience we can consider this as a future feature.

The hypervisor being configured is in a datacentre environment and has only one Ethernet port, connected into a private VLAN. In order to make use of a private subnet for NFS/CTDB data (for example), the additional IP address(es) are added to the primary network interface, and therefore, in a configured environment, to the bridged ovirtmgmt interface.

The primary concern is that the default mechanism for assigning IP configurations in the network scripts changed between RHEL 6 and RHEL 7 specifically to cater for multiple-IP environments: a single IP address is now normally assigned using the IPADDR0 and PREFIX0 notation, encouraging the addition of further addresses. Despite this change in standard, when ovirtmgmt is configured automatically, the ifcfg-ovirtmgmt file is still written using the old mechanism. Assuming vdsm's network configuration was updated specifically to cater for EL7 environments, handling multiple IP configurations via the new mechanism ought to be expected.

In terms of actually supporting the configuration, even with only one IP supported, the most important thing is that at the very least the correct IP address is assigned to the ovirtmgmt interface. If the GATEWAY is converted from GATEWAY0, then the IPADDR must be converted from IPADDR0, not, as in my case, from IPADDR1. The assumption that the last encountered variable in the IPADDR(X) list should be the only IP applied to the interface meant that my remote server was cut off from all remote access and required console recovery. I do not think it unreasonable to expect that at least the IP address in the same subnet as the default gateway is applied, if not the rest of the configured addresses as well.
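For reference, the two ifcfg notations side by side (addresses here are documentation examples, not the real ones from this setup):

```
# EL6-era style -- still what vdsm writes to ifcfg-ovirtmgmt:
IPADDR=192.0.2.10
NETMASK=255.255.255.0
GATEWAY=192.0.2.1

# EL7 initscripts style -- as used in the original ifcfg-eno1,
# which extends naturally to additional addresses:
IPADDR0=192.0.2.10
PREFIX0=24
GATEWAY0=192.0.2.1
IPADDR1=198.51.100.10
PREFIX1=24
```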
Also, I set a high priority on this bug primarily because VDSM overwrites the network configuration on boot. I am therefore currently finding it impossible to maintain a second IP in any way, as I cannot prevent VDSM from overwriting my corrected ifcfg-ovirtmgmt file.
Thanks, Zordrak. The fact that Vdsm reported and eventually owned the invalid IPADDR1/GATEWAY0 combination is indeed awkward and unfortunate, but I believe it's just one facet of the fact that we assume, in too many places in the code, that an interface has at most one IPv4 address. As a mitigation, could you try using the relatively new after_network_setup hook (http://gerrit.ovirt.org/#/c/20330/11/vdsm/vdsmd.8.in)? You can hack a script that adds the second IP address to the ifcfg file and re-applies it. If you do, please share it here!
As discussed externally, I have used after_network_setup to apply the correct ifcfg-ovirtmgmt file via a heredoc, followed by an ifup. The better option would be to use before_network_setup to trim ovirtmgmt from the network operations so that the file is not touched at all; however, that is more difficult to implement and is more than I have the patience for at the moment. Could there perhaps be scope for an additional parameter in the VDSM configuration listing networks VDSM should not interfere with, which could then be implemented in a very similar manner to the before_network_setup solution?
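For anyone hitting the same problem, a minimal sketch of such a hook follows. It is an illustration, not the exact script used here: the real hook would be installed under the vdsm hooks directory (e.g. /usr/libexec/vdsm/hooks/after_network_setup/) and would write /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt; a local path and placeholder addresses are used so the sketch is self-contained.

```shell
#!/bin/sh
# Sketch of an after_network_setup hook: rewrite ifcfg-ovirtmgmt with both
# addresses after vdsm has clobbered it, then re-apply the configuration.
# IFCFG is a local path here for illustration; in production it would be
# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt. Addresses are examples.
IFCFG="./ifcfg-ovirtmgmt"

cat > "$IFCFG" <<'EOF'
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR0=192.0.2.10
PREFIX0=24
GATEWAY0=192.0.2.1
IPADDR1=198.51.100.10
PREFIX1=24
EOF

# Bring the bridge back up with the restored config; ignore failure when
# run outside a real host (e.g. when ifup is unavailable).
if command -v ifup >/dev/null 2>&1; then
    ifup ovirtmgmt || true
fi
```

The heredoc overwrite is crude but effective, since the hook fires after every network setup and so undoes vdsm's rewrite each time.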
http://gerrit.ovirt.org/#/c/29738/ suggests a vdsm hook for configuring multiple ipv4 addresses.
Verified on 4.0.0-0.0.master.20160515171411.git6759e3f.el7.centos with
vdsm-4.17.999-1132.git3e4bd0a.el7.centos.x86_64
vdsm-hook-extra-ipv4-addrs-4.17.999-1132.git3e4bd0a.el7.centos.x86_64

- Install vdsm-hook-extra-ipv4-addrs on the host
- Run: engine-config -s 'UserDefinedNetworkCustomProperties=ipv4_addrs=.*' --cver='4.0'
- In the oVirt UI, edit the custom network properties (edit the network in the Setup Networks dialog) and, for the key 'ipv4_addrs', set the extra addresses in the following format: 5.5.5.5/24, 5.5.5.6/24
- On the host:

2: n-3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:14:5e:17:d5:b2 brd ff:ff:ff:ff:ff:ff
    inet 5.5.5.5/24 brd 5.5.5.255 scope global n-3
       valid_lft forever preferred_lft forever
    inet 5.5.5.6/24 scope global secondary n-3
       valid_lft forever preferred_lft forever

Note: the engine UI does not display the extra IP(s), only the first one.
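The host-side check can also be scripted: extra addresses show up with the `secondary` flag in `ip addr` output. A small sketch, with sample `ip -4 -o addr show`-style output inlined so it runs anywhere (on a real host you would pipe the live command output instead):

```shell
# Extract the secondary (extra) IPv4 addresses from one-line-per-address
# `ip -4 -o addr show` style output. The sample mirrors the verification
# above: 5.5.5.5/24 is primary, 5.5.5.6/24 is the hook-added extra.
ip_output='2: n-3    inet 5.5.5.5/24 brd 5.5.5.255 scope global n-3
2: n-3    inet 5.5.5.6/24 scope global secondary n-3'

secondaries=$(printf '%s\n' "$ip_output" |
    awk '/ secondary / { for (i = 1; i <= NF; i++) if ($i == "inet") print $(i + 1) }')

echo "$secondaries"    # -> 5.5.5.6/24
```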
oVirt 4.0.0 has been released, closing current release.