Bug 1534432 - Node-Zero: check for valid bond names and active interfaces
Summary: Node-Zero: check for valid bond names and active interfaces
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-hosted-engine-setup
Classification: oVirt
Component: General
Version: 2.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ovirt-4.2.2
Target Release: ---
Assignee: Ido Rosenzwig
QA Contact: Nikolai Sednev
URL:
Whiteboard:
Depends On:
Blocks: 1458709
 
Reported: 2018-01-15 09:05 UTC by Ido Rosenzwig
Modified: 2018-03-29 10:56 UTC (History)

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2018-03-29 10:56:02 UTC
oVirt Team: Integration
Embargoed:
rule-engine: ovirt-4.2+
sbonazzo: devel_ack+


Attachments: None


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 86388 0 master ABANDONED ansible: Add network interfaces filter for ovirtmgmt bridge 2018-02-12 07:07:04 UTC
oVirt gerrit 86411 0 master MERGED ansible: Add check for active interface and bond naming format 2018-09-03 09:36:33 UTC

Description Ido Rosenzwig 2018-01-15 09:05:08 UTC
Description of problem:
On a Node-Zero deployment, invalid bond names can be used (bond names that do not start with "bond" followed by digits).

In addition, inactive interfaces that have an IPv4 or IPv6 address (such as a statically configured one) can also be chosen for the ovirtmgmt bridge.
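
For illustration, here is a minimal sketch of the two checks (bond name format and interface activity) in Python. The helper names and the sysfs lookup are assumptions made for this sketch; the real fix is an Ansible check (gerrit 86411) and gathers interface facts differently:

    import re

    # Illustrative only: not the actual Ansible filter merged for this bug.
    BOND_NAME_RE = re.compile(r"^bond[0-9]+$")  # "bond" followed by digits

    def is_valid_bond_name(name):
        """A bond is a valid ovirtmgmt candidate only if it is named bond<N>."""
        return bool(BOND_NAME_RE.match(name))

    def is_active_interface(iface):
        """Treat an interface as active when its kernel operstate is "up".
        Assumption: /sys/class/net/<iface>/operstate is readable; the real
        deployment reads interface state from Ansible facts instead."""
        try:
            with open("/sys/class/net/%s/operstate" % iface) as state:
                return state.read().strip() == "up"
        except OSError:
            return False

    print(is_valid_bond_name("vsya1pupkin2"))  # False - improper bond name is rejected
    print(is_valid_bond_name("bond0"))         # True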

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Create a bond with an invalid name, or configure an interface with a static IP.
2. Run hosted-engine --deploy --ansible.

Actual results:
The invalid interfaces can be chosen.

Expected results:
The invalid interfaces should not be presented to the user.

Comment 1 Nikolai Sednev 2018-02-22 09:45:00 UTC
Further to our conversation with Ido, this is now working as designed.
I statically configured an IPv4 address on enp5s0f1 and activated it; it was shown during deployment, as designed.
I configured and activated a bond with two slaves (eno1 and eno2) and gave it the improper name "vsya1pupkin2"; it was hidden during deployment, as designed.
Moving to verified as agreed.

Configurations:
alma03 ~]# ifconfig -a
eno1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether e0:db:55:fc:cf:43  txqueuelen 1000  (Ethernet)
        RX packets 143  bytes 24104 (23.5 KiB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 767  bytes 105826 (103.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0x91720000-9173ffff  

eno2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether e0:db:55:fc:cf:43  txqueuelen 1000  (Ethernet)
        RX packets 156  bytes 26896 (26.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 751  bytes 89608 (87.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0x91700000-9171ffff  

enp5s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.35.92.3  netmask 255.255.252.0  broadcast 10.35.95.255
        inet6 fe80::a236:9fff:fe3a:c4f0  prefixlen 64  scopeid 0x20<link>
        inet6 2620:52:0:235c:a236:9fff:fe3a:c4f0  prefixlen 64  scopeid 0x0<global>
        ether a0:36:9f:3a:c4:f0  txqueuelen 1000  (Ethernet)
        RX packets 2174  bytes 169654 (165.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 475  bytes 80427 (78.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp5s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 1.1.1.1  netmask 255.255.255.255  broadcast 1.1.1.1
        inet6 2620:52:0:2344:ad0c:e358:1496:641f  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::b725:4e09:f9e8:fe78  prefixlen 64  scopeid 0x20<link>
        ether a0:36:9f:3a:c4:f2  txqueuelen 1000  (Ethernet)
        RX packets 5043  bytes 311118 (303.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 100  bytes 7393 (7.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 32  bytes 2912 (2.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32  bytes 2912 (2.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:76:68:6f  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0-nic: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 52:54:00:76:68:6f  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vsya1pupkin2: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet6 fe80::7155:7ab:9d5f:2440  prefixlen 64  scopeid 0x20<link>
        ether e0:db:55:fc:cf:43  txqueuelen 1000  (Ethernet)
        RX packets 20  bytes 3424 (3.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 61  bytes 7892 (7.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


[ INFO  ] TASK [filter bonds with bad naming]
          Please indicate a nic to set ovirtmgmt bridge on: (enp5s0f1, enp5s0f0) [enp5s0f1]:

ovirt-hosted-engine-ha-2.2.6-1.el7ev.noarch
ovirt-hosted-engine-setup-2.2.11-1.el7ev.noarch
Red Hat Enterprise Linux Server release 7.5 Beta (Maipo)
Linux alma03.qa.lab.tlv.redhat.com 3.10.0-855.el7.x86_64 #1 SMP Tue Feb 20 06:46:45 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

Comment 2 Sandro Bonazzola 2018-03-29 10:56:02 UTC
This bugzilla is included in the oVirt 4.2.2 release, published on March 28th 2018.

Since the problem described in this bug report should be
resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

