Bug 1313916

Summary: we should default to current hostname for host name in engine in initial HE setup
Product: [oVirt] ovirt-hosted-engine-setup
Reporter: Yedidyah Bar David <didi>
Component: RFEs
Assignee: Ido Rosenzwig <irosenzw>
Status: CLOSED CURRENTRELEASE
QA Contact: Nikolai Sednev <nsednev>
Severity: medium
Docs Contact:
Priority: medium
Version: 2.0.0
CC: bugs, dfediuck, fdeutsch, mavital, sbonazzo, ylavi
Target Milestone: ovirt-4.1.0-alpha
Keywords: FutureFeature, Reopened, Triaged
Target Release: 2.1.0
Flags: rule-engine: ovirt-4.1+
gklein: testing_plan_complete+
rule-engine: planning_ack+
dfediuck: devel_ack+
gklein: testing_ack+
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-02-01 14:45:46 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Integration
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1403956    
Bug Blocks: 1412024    
Attachments:
Screenshot from 2016-12-19 14-08-27.png

Description Yedidyah Bar David 2016-03-02 15:31:55 UTC
Description of problem:

          Enter the name which will be used to identify this host inside the Administrator Portal [hosted_engine_2]:

The default should be the current hostname, not "hosted_engine_ID".

Comment 1 Yaniv Lavi 2016-06-28 14:26:29 UTC
Going forward, we will only support adding additional hosts via the UI; therefore, closing this bug.

Comment 2 Yaniv Lavi 2016-06-28 14:27:10 UTC
Reopening to address this at bootstrap time only.

Comment 3 Ido Rosenzwig 2016-07-13 09:06:44 UTC
Someone already fixed this bug.

Comment 4 Ido Rosenzwig 2016-07-13 09:47:32 UTC
Reopening. The bug wasn't fixed.

Comment 5 Sandro Bonazzola 2016-07-27 13:37:33 UTC
Moving back to 4.1, failed QA

Comment 6 Sandro Bonazzola 2016-12-12 13:53:10 UTC
The fix for this issue should be included in oVirt 4.1.0 beta 1, released on December 1st. If it is not included, please move the bug back to MODIFIED.

Comment 7 Nikolai Sednev 2016-12-12 17:48:47 UTC
During deployment of HE I was not asked to provide the host's FQDN; it was taken correctly from the host. During the deployment I saw it under "--== CONFIGURATION PREVIEW ==--".

Works for me with these components on the host:
ovirt-engine-appliance-4.1-20161202.1.el7.centos.noarch
ovirt-imageio-common-0.5.0-0.201611201242.gitb02532b.el7.centos.noarch
ovirt-setup-lib-1.1.0-0.0.master.20161107100014.gitb73abeb.el7.centos.noarch
ovirt-hosted-engine-setup-2.1.0-0.0.master.20161130101611.gitb3ad261.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7ev.noarch
qemu-kvm-rhev-2.6.0-28.el7_3.2.x86_64
ovirt-host-deploy-1.6.0-0.0.master.20161107121647.gitfd7ddcd.el7.centos.noarch
rhev-release-4.0.6-6-001.noarch
sanlock-3.4.0-1.el7.x86_64
mom-0.5.8-1.el7ev.noarch
ovirt-imageio-daemon-0.5.0-0.201611201242.gitb02532b.el7.centos.noarch
vdsm-4.18.999-1020.git1ff41b1.el7.centos.x86_64
ovirt-vmconsole-host-1.0.4-1.el7ev.noarch
ovirt-release41-pre-4.1.0-0.0.beta.20161201085255.git731841c.el7.centos.noarch
ovirt-hosted-engine-ha-2.1.0-0.0.master.20161130135331.20161130135328.git3541725.el7.centos.noarch
libvirt-client-2.0.0-10.el7_3.2.x86_64
ovirt-vmconsole-1.0.4-1.el7ev.noarch
Linux version 3.10.0-514.2.2.el7.x86_64 (mockbuild.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 SMP Wed Nov 16 13:15:13 EST 2016
Linux 3.10.0-514.2.2.el7.x86_64 #1 SMP Wed Nov 16 13:15:13 EST 2016 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.3 (Maipo)

Used ovirt-engine-appliance-4.1-20161202.1.el7.centos.noarch for the engine.

Still holding this bug in its present state; due to bug 1403956, I will have to check that the host's FQDN appears correctly in the Web UI after deployment.

Comment 8 Yedidyah Bar David 2016-12-13 07:07:40 UTC
(In reply to Nikolai Sednev from comment #7)
> Still holding this bug in its present state; due to bug 1403956, I will have
> to check that the host's FQDN appears correctly in the Web UI after deployment.

Please note that you should see the _hostname_ (`hostname`) under 'Name'. Under 'Hostname/IP', you should see the result of a reverse-lookup on the IP address of the nic you used to create the management bridge. Please verify the case where these are different.

Better verify (perhaps routinely, not just for this bug) a somewhat more complex setup, e.g. something like the following (a small check sketch follows the list):

* 3 hosts
* each with 3 nics:
- ethmgmt for ovirtmgmt
- ethstorage for gluster storage traffic
- ethvm for VM traffic
* each nic has its own IP address (as applicable, I think you do not need one on ethvm)
* each IP address has reverse-resolution, and the result resolves correctly to that IP address
* each of the names is unique, and also different from `hostname` (and also from `hostname -f`, which should be different too)
* networks are separated
* all configuration is done using the names, not IP addresses
* verify that "all works as expected" - both in 'hosted-engine --vm-status', in the web ui (and api/sdk), and in actual traffic you see on each network.
* verify that you can change the IP address of a nic and everything continues to work (perhaps after a restart of the relevant service; perhaps this should be done in local maintenance).

Comment 9 Nikolai Sednev 2016-12-19 12:16:25 UTC
(In reply to Yedidyah Bar David from comment #8)
> (In reply to Nikolai Sednev from comment #7)
> > Still holding this bug in its present state; due to bug 1403956, I will
> > have to check that the host's FQDN appears correctly in the Web UI after
> > deployment.
> 
> Please note that you should see the _hostname_ (`hostname`) under 'Name'.
> Under 'Hostname/IP', you should see the result of a reverse-lookup on the IP
> address of the nic you used to create the management bridge. Please verify
> the case where these are different.
> 
> Better verify (perhaps routinely, not just for this bug) a somewhat more
> complex setup, e.g. something like:
> 
> * 3 hosts
> * each with 3 nics:
> - ethmgmt for ovirtmgmt
> - ethstorage for gluster storage traffic
> - ethvm for VM traffic
> * each nic has its own IP address (as applicable, I think you do not need
> one on ethvm)
> * each IP address has reverse-resolution, and the result resolves correctly
> to that IP address
> * each of the names is unique, and also different from `hostname` (and also
> from `hostname -f`, which should be different too)
> * networks are separated
> * all configuration is done using the names, not IP addresses
> * verify that "all works as expected" - both in 'hosted-engine --vm-status',
> in the web ui (and api/sdk), and in actual traffic you see on each network.
> * verify that you can change the IP address of a nic and everything
> continues to work (perhaps after a restart of the relevant service; perhaps
> this should be done in local maintenance).

For a standard deployment this works fine with these components on the host:
ovirt-engine-appliance-4.1-20161202.1.el7.centos.noarch
mom-0.5.8-1.el7ev.noarch
ovirt-hosted-engine-setup-2.1.0-0.0.master.20161130101611.gitb3ad261.el7.centos.noarch
ovirt-setup-lib-1.1.0-0.0.master.20161107100014.gitb73abeb.el7.centos.noarch
ovirt-imageio-daemon-0.5.0-0.201611201242.gitb02532b.el7.centos.noarch
ovirt-release41-pre-4.1.0-0.0.beta.20161201085255.git731841c.el7.centos.noarch
sanlock-3.4.0-1.el7.x86_64
qemu-kvm-rhev-2.6.0-28.el7_3.2.x86_64
ovirt-hosted-engine-ha-2.1.0-0.0.master.20161130135331.20161130135328.git3541725.el7.centos.noarch
ovirt-host-deploy-1.6.0-0.0.master.20161107121647.gitfd7ddcd.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7ev.noarch
ovirt-imageio-common-0.5.0-0.201611201242.gitb02532b.el7.centos.noarch
ovirt-vmconsole-host-1.0.4-1.el7ev.noarch
vdsm-4.18.999-1020.git1ff41b1.el7.centos.x86_64
libvirt-client-2.0.0-10.el7_3.2.x86_64
ovirt-vmconsole-1.0.4-1.el7ev.noarch
Linux version 3.10.0-514.2.2.el7.x86_64 (mockbuild.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 SMP Wed Nov 16 13:15:13 EST 2016
Linux 3.10.0-514.2.2.el7.x86_64 #1 SMP Wed Nov 16 13:15:13 EST 2016 x86_64 x86_64 x86_64 GNU/Linux
Red Hat Enterprise Linux Server release 7.3 (Maipo)


On engine:
ovirt-engine-dwh-setup-4.1.0-0.0.master.20161129154019.el7.centos.noarch
ovirt-engine-tools-backup-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-userportal-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-backend-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-wildfly-10.1.0-1.el7.x86_64
python-ovirt-engine-sdk4-4.1.0-0.1.a0.20161128gitdcd1d90.el7.centos.x86_64
ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
ovirt-engine-cli-3.6.9.2-1.el7.centos.noarch
ovirt-engine-extensions-api-impl-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.2-0.1.20161130.gite99bbd1.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-common-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-dwh-4.1.0-0.0.master.20161129154019.el7.centos.noarch
ovirt-engine-setup-plugin-websocket-proxy-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-dbscripts-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-webadmin-portal-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-setup-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-vmconsole-proxy-helper-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-setup-plugin-ovirt-engine-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-restapi-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-lib-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-websocket-proxy-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-dashboard-1.1.0-0.4.20161128git5ed6f96.el7.centos.noarch
ovirt-engine-hosts-ansible-inventory-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-tools-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
ovirt-engine-extension-aaa-jdbc-1.1.3-0.0.master.20161118164738.gitd0ff686.el7.noarch
ovirt-engine-setup-base-4.1.0-0.0.master.20161201071307.gita5ff876.el7.centos.noarch
Linux version 3.10.0-327.36.3.el7.x86_64 (builder.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC) ) #1 SMP Mon Oct 24 16:09:20 UTC 2016
Linux 3.10.0-327.36.3.el7.x86_64 #1 SMP Mon Oct 24 16:09:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
CentOS Linux release 7.2.1511 (Core) 

Regarding adding an additional host to the environment: that part failed, and a separate bug is being opened on that issue.
The hosted-engine host had 4 NICs, 3 of which were connected to the lab's network with different IPs. The NIC chosen for the deployment was successfully bridged on the host with the ovirtmgmt bridge and received an IP address during the deployment. Once the HE deployment finished, I saw the correct data in the Web UI.

Moving this bug to verified.

Comment 10 Nikolai Sednev 2016-12-19 12:16:57 UTC
Created attachment 1233369 [details]
Screenshot from 2016-12-19 14-08-27.png

Comment 11 Nikolai Sednev 2016-12-19 12:18:15 UTC
puma18 ~]# ifconfig
enp4s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:9c:02:b0:ef:08  txqueuelen 1000  (Ethernet)
        RX packets 2407289  bytes 3391317290 (3.1 GiB)
        RX errors 0  dropped 119  overruns 0  frame 0
        TX packets 2895061  bytes 3916643293 (3.6 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp4s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:9c:02:b0:ef:0c  txqueuelen 1000  (Ethernet)
        RX packets 14129  bytes 927852 (906.1 KiB)
        RX errors 0  dropped 9  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp5s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 44:1e:a1:73:39:26  txqueuelen 1000  (Ethernet)
        RX packets 19822  bytes 1386087 (1.3 MiB)
        RX errors 0  dropped 2593  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xfbe60000-fbe7ffff  

enp5s0f1: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 44:1e:a1:73:39:27  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xfbee0000-fbefffff  

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 21974  bytes 11597253 (11.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21974  bytes 11597253 (11.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.35.160.45  netmask 255.255.255.0  broadcast 10.35.160.255
        ether 00:9c:02:b0:ef:08  txqueuelen 1000  (Ethernet)
        RX packets 185633  bytes 2234422017 (2.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 172900  bytes 3893787693 (3.6 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vnet0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::fc16:3eff:fe7d:dddd  prefixlen 64  scopeid 0x20<link>
        ether fe:16:3e:7d:dd:dd  txqueuelen 1000  (Ethernet)
        RX packets 7099  bytes 12506948 (11.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10802  bytes 2811324 (2.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0