Bug 1756244 - On dual stack env hosted-engine deploy chooses IPv6 just due to a link-local IPv6 address
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-hosted-engine-setup
Version: 4.3.5
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ovirt-4.4.0
Target Release: 4.4.0
Assignee: Dominik Holler
QA Contact: Roni
URL:
Whiteboard:
Duplicates: 1746585
Depends On:
Blocks:
 
Reported: 2019-09-27 07:54 UTC by Juan Orti
Modified: 2020-07-13 19:32 UTC
CC List: 6 users

Fixed In Version: ovirt-ansible-hosted-engine-setup-1.1.1 ovirt-hosted-engine-setup-2.4.4
Doc Type: Bug Fix
Doc Text:
Previously, on an IPv4-only host with a .local FQDN, the deployment kept looping, searching for an available IPv6 prefix until it failed. This happened because hosted-engine setup picked a link-local IP address for the host. Hosted-engine setup cannot ensure that the Engine and the host are on the same subnet when one of them uses a link-local address, and the Engine must not use a link-local address if it is to be reachable through a routed network. The current release fixes this issue: even if the hostname resolves to a link-local IP address, hosted-engine setup ignores link-local addresses and tries to use another IP address as the address for the host. As a result, hosted-engine can deploy on hosts even if the hostname resolves to a link-local address.
Clone Of:
Environment:
Last Closed: 2020-03-30 13:27:35 UTC
oVirt Team: Integration
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Github oVirt ovirt-ansible-hosted-engine-setup pull 308 0 None closed Avoid link local as host IP address 2020-12-16 22:11:56 UTC
Red Hat Knowledge Base (Solution) 4449501 0 None None None 2019-09-27 07:54:31 UTC
oVirt gerrit 100359 0 master MERGED Omit non-global IP addresses 2020-12-16 22:11:24 UTC

Description Juan Orti 2019-09-27 07:54:32 UTC
Description of problem:
On an IPv4-only host with a .local FQDN, the deployment keeps looping, searching for an available IPv6 prefix until it fails.
The DNS resolves the host's IPv6 link-local addresses.
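The problematic resolution can be illustrated with a short sketch. This is not code from hosted-engine-setup; it is a hypothetical parser, assuming the `getent ahosts` output format shown in step 2 below (addresses anonymized as in this report):

```python
import ipaddress

# Sample of what `getent ahosts rhvm.example.local` returned on this host.
getent_output = """\
fe80::a:b:c:d STREAM rhvm.example.local
fe80::a:b:c:d STREAM
10.0.0.1      STREAM
"""

def resolved_addresses(output):
    """Extract the unique IP addresses from `getent ahosts` STREAM lines."""
    seen = []
    for line in output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1] == "STREAM":
            addr = ipaddress.ip_address(parts[0])
            if addr not in seen:
                seen.append(addr)
    return seen

addrs = resolved_addresses(getent_output)
# The first result is a link-local IPv6 address, which is what steered
# hosted-engine-setup toward an IPv6 deployment.
print(addrs[0].is_link_local)       # True
print([str(a) for a in addrs])      # ['fe80::a:b:c:d', '10.0.0.1']
```

The key point is that the link-local `fe80::` address sorts first in the resolver output, ahead of the usable IPv4 address.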

Version-Release number of selected component (if applicable):
ovirt-ansible-engine-setup-1.1.9-1.el7ev.noarch
ovirt-ansible-hosted-engine-setup-1.0.26-1.el7ev.noarch
ovirt-ansible-repositories-1.1.5-1.el7ev.noarch
ovirt-hosted-engine-ha-2.3.3-1.el7ev.noarch
ovirt-hosted-engine-setup-2.3.11-1.el7ev.noarch
ovirt-host-4.3.4-1.el7ev.x86_64
ovirt-host-dependencies-4.3.4-1.el7ev.x86_64
ovirt-host-deploy-common-1.8.0-1.el7ev.noarch
vdsm-4.30.24-2.el7ev.x86_64
redhat-release-virtualization-host-4.3.5-4.el7ev.x86_64

How reproducible:
Always

Steps to Reproduce:
1. RHVH 4.3.5 host with FQDN rhvm.example.local

2. The DNS is resolving this:
2019-09-26 11:19:28,377+0400 DEBUG var changed: host "localhost" var "hostname_resolution_output" type "<type 'dict'>" value: "{
    "changed": true, 
    "cmd": "getent ahosts rhvm.example.local | grep STREAM | cat", 
    "delta": "0:00:00.027634", 
    "end": "2019-09-26 11:19:27.059575", 
    "failed": false, 
    "rc": 0, 
    "start": "2019-09-26 11:19:27.031941", 
    "stderr": "", 
    "stderr_lines": [], 
    "stdout": "fe80::a:b:c:d STREAM rhvm.example.local\nfe80::a:b:c:d STREAM \n10.0.0.1      STREAM ", 
    "stdout_lines": [
        "fe80::a:b:c:d STREAM rhvm.example.local", 
        "fe80::a:b:c:d STREAM ", 
        "10.0.0.1      STREAM "
    ]
}"

3. /etc/nsswitch.conf:
hosts:      files dns myhostname

4. /etc/resolv.conf:
# Generated by NetworkManager
search example.local
nameserver 10.1.2.3

5. hosted-engine --deploy


Actual results:
The Ansible playbook keeps looping searching for a prefix until it fails.

~~~
2019-09-26 20:20:46,393+0400 DEBUG var changed: host "localhost" var "result" type "<type 'dict'>" value: "{
    "changed": true, 
    "cmd": "ip -6 route get fd00:1234:1045:900::1 | grep \"via\" | cat", 
    "delta": "0:00:00.011143", 
    "end": "2019-09-26 20:20:45.073646", 
    "failed": false, 
    "rc": 0, 
    "start": "2019-09-26 20:20:45.062503", 
    "stderr": "", 
    "stderr_lines": [], 
    "stdout": "", 
    "stdout_lines": []
}"
2019-09-26 20:20:46,393+0400 INFO ansible ok {'status': 'OK', 'ansible_type': 'task', 'ansible_task': u'Get ip route', 'task_duration': 3, 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
2019-09-26 20:20:46,394+0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f75bdd06f90> kwargs 
2019-09-26 20:20:47,727+0400 INFO ansible task start {'status': 'OK', 'ansible_task': u'ovirt.hosted_engine_setup : debug', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-09-26 20:20:47,727+0400 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : debug kwargs is_conditional:False 
2019-09-26 20:20:47,727+0400 DEBUG ansible on_any args localhostTASK: ovirt.hosted_engine_setup : debug kwargs 
2019-09-26 20:20:48,987+0400 INFO ansible ok {'status': 'OK', 'ansible_type': 'task', 'ansible_task': u'', 'task_duration': 2, 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
2019-09-26 20:20:48,987+0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f75bd8e3d90> kwargs 
2019-09-26 20:20:50,387+0400 INFO ansible task start {'status': 'OK', 'ansible_task': u"ovirt.hosted_engine_setup : Fail if can't find an available subnet", 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-09-26 20:20:50,387+0400 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : Fail if can't find an available subnet kwargs is_conditional:False 
2019-09-26 20:20:50,387+0400 DEBUG ansible on_any args localhostTASK: ovirt.hosted_engine_setup : Fail if can't find an available subnet kwargs 
2019-09-26 20:20:51,708+0400 INFO ansible skipped {'status': 'SKIPPED', 'ansible_task': u"Fail if can't find an available subnet", 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-09-26 20:20:51,709+0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f75bdd064d0> kwargs 
2019-09-26 20:20:53,041+0400 INFO ansible task start {'status': 'OK', 'ansible_task': u'ovirt.hosted_engine_setup : Set new IPv6 subnet prefix', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-09-26 20:20:53,041+0400 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : Set new IPv6 subnet prefix kwargs is_conditional:False 
2019-09-26 20:20:53,042+0400 DEBUG ansible on_any args localhostTASK: ovirt.hosted_engine_setup : Set new IPv6 subnet prefix kwargs 
2019-09-26 20:20:54,289+0400 INFO ansible skipped {'status': 'SKIPPED', 'ansible_task': u'Set new IPv6 subnet prefix', 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-09-26 20:20:54,289+0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f75bdc30110> kwargs 
2019-09-26 20:20:55,621+0400 INFO ansible task start {'status': 'OK', 'ansible_task': u'ovirt.hosted_engine_setup : Search again with another prefix', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-09-26 20:20:55,622+0400 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : Search again with another prefix kwargs is_conditional:False 
2019-09-26 20:20:55,622+0400 DEBUG ansible on_any args localhostTASK: ovirt.hosted_engine_setup : Search again with another prefix kwargs 
2019-09-26 20:20:56,868+0400 INFO ansible ok {'status': 'OK', 'ansible_type': 'task', 'ansible_task': u'Search again with another prefix', 'task_duration': 2, 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}
2019-09-26 20:20:56,869+0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f75bd847a10> kwargs 
2019-09-26 20:20:57,019+0400 DEBUG ansible on_any args /usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/search_available_network_subnet.yaml (args={} vars={}): [localhost] kwargs 
2019-09-26 20:20:58,390+0400 INFO ansible task start {'status': 'OK', 'ansible_task': u'ovirt.hosted_engine_setup : Define 3rd chunk', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-09-26 20:20:58,391+0400 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : Define 3rd chunk kwargs is_conditional:False 
2019-09-26 20:20:58,391+0400 DEBUG ansible on_any args localhostTASK: ovirt.hosted_engine_setup : Define 3rd chunk kwargs 
2019-09-26 20:20:59,646+0400 INFO ansible skipped {'status': 'SKIPPED', 'ansible_task': u'Define 3rd chunk', 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-09-26 20:20:59,647+0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f75bdc3e090> kwargs 
2019-09-26 20:21:00,983+0400 INFO ansible task start {'status': 'OK', 'ansible_task': u'ovirt.hosted_engine_setup : Set 3rd chunk', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-09-26 20:21:00,983+0400 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : Set 3rd chunk kwargs is_conditional:False 
2019-09-26 20:21:00,984+0400 DEBUG ansible on_any args localhostTASK: ovirt.hosted_engine_setup : Set 3rd chunk kwargs 
2019-09-26 20:21:02,229+0400 INFO ansible skipped {'status': 'SKIPPED', 'ansible_task': u'Set 3rd chunk', 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-09-26 20:21:02,229+0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f75bdc3e090> kwargs 
2019-09-26 20:21:03,560+0400 INFO ansible task start {'status': 'OK', 'ansible_task': u'ovirt.hosted_engine_setup : debug', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-09-26 20:21:03,561+0400 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : debug kwargs is_conditional:False 
2019-09-26 20:21:03,561+0400 DEBUG ansible on_any args localhostTASK: ovirt.hosted_engine_setup : debug kwargs 
2019-09-26 20:21:04,889+0400 INFO ansible skipped {'status': 'SKIPPED', 'ansible_task': '', 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-09-26 20:21:04,890+0400 DEBUG ansible on_any args <ansible.executor.task_result.TaskResult object at 0x7f75bd96fb50> kwargs 
2019-09-26 20:21:06,277+0400 INFO ansible task start {'status': 'OK', 'ansible_task': u'ovirt.hosted_engine_setup : Get ip route', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml', 'ansible_type': 'task'}
2019-09-26 20:21:06,278+0400 DEBUG ansible on_any args TASK: ovirt.hosted_engine_setup : Get ip route kwargs is_conditional:False 
2019-09-26 20:21:06,278+0400 DEBUG ansible on_any args localhostTASK: ovirt.hosted_engine_setup : Get ip route kwargs 
2019-09-26 20:21:07,529+0400 DEBUG var changed: host "localhost" var "result" type "<type 'dict'>" value: "{
    "changed": false, 
    "skip_reason": "Conditional result was False", 
    "skipped": true
}"
~~~

Expected results:
The host only has a link-local IPv6 address, so the deployment should use IPv4 only.


Additional info:

Comment 3 Simone Tiraboschi 2019-09-27 10:54:49 UTC
On dual-stack environments, by default hosted-engine-setup chooses to deploy on IPv4 or IPv6 according to the first result of FQDN resolution.

We have the --4 and --6 options (and the same in the advanced section of Cockpit) to force IPv4 or IPv6 on dual-stack environments.

[root@tiramd1 ~]# hosted-engine --deploy --help
Usage: /sbin/hosted-engine --deploy [args]
    Run ovirt-hosted-engine deployment.

    --config-append=<file>
        Load extra configuration files.
    --generate-answer=<file>
        Generate answer file.
    --restore-from-file=file
         Restore an engine backup file during the deployment
    --4
        Force IPv4 on dual stack env
    --6
        Force IPv6 on dual stack env

In this case the issue is that it chose IPv6 just because of the link-local IPv6 address.

Comment 4 Sandro Bonazzola 2019-10-02 07:06:34 UTC
*** Bug 1746585 has been marked as a duplicate of this bug. ***

Comment 5 Juan Orti 2019-10-02 07:45:51 UTC
I don't think this happens merely because the host has an IPv6 link-local address, since all systems have one unless it is explicitly disabled.
It happens because the DNS server resolves the host's IPv6 link-local address:

/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/bootstrap_local_vm/01_prepare_routing_rules.yml

  - name: Check IPv6
    set_fact:
      ipv6_deployment: >-
        {{ true if he_host_ip not in target_address_v4.stdout_lines and
        he_host_ip in target_address_v6.stdout_lines
        else false }}
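The merged fix (gerrit 100359, "Omit non-global IP addresses") skips such addresses when choosing the host IP. A minimal Python sketch of that selection logic follows; the function name is illustrative, and the actual Ansible role implements this with Jinja2 filters rather than Python code:

```python
import ipaddress

def pick_host_ip(candidates):
    """Pick the first resolved address that is neither link-local nor loopback.

    Mirrors the idea of the merged fix: link-local addresses are ignored even
    when the hostname resolves to one first.
    """
    for cand in candidates:
        addr = ipaddress.ip_address(cand)
        if not (addr.is_link_local or addr.is_loopback):
            return str(addr)
    return None

# With the resolution from this report, the IPv4 address is chosen,
# so the deployment proceeds over IPv4:
print(pick_host_ip(["fe80::a:b:c:d", "10.0.0.1"]))   # 10.0.0.1
```

With this filter in place, `ipv6_deployment` is no longer set to true just because the link-local address appears first in the resolver output.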

Comment 6 Yedidyah Bar David 2020-02-12 08:23:32 UTC
I think we might need another patch, for cockpit ui. Still need to check.

Comment 7 Sandro Bonazzola 2020-03-11 15:58:36 UTC
We are past 4.3.9 freeze, moving out to 4.4.0.

Comment 8 Dominik Holler 2020-03-17 11:02:41 UTC
(No) Hint for verification:
This bug is triggered if the output of 
getent ahosts $(hostname -f)
is similar to the output of
echo -e "fe80::f292:1cff:fe07:83e4 STREAM hetest.virt\nfe80::f292:1cff:fe07:83e4 STREAM \n192.168.122.159      STREAM "
but unfortunately I found no way to make getent behave like this.
It is probably possible to modify the host name resolution configuration to do so; /etc/nsswitch.conf would be a good place to start, but I gave up.
I even tried running my own DNS server, without success.
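Short of bending getent itself, the trigger condition can at least be checked programmatically. This sketch hard-codes the sample output from this comment; on a real host you would feed in the output of `getent ahosts "$(hostname -f)"` instead:

```python
import ipaddress

# Sample resolution from this comment (verbatim).
sample = (
    "fe80::f292:1cff:fe07:83e4 STREAM hetest.virt\n"
    "fe80::f292:1cff:fe07:83e4 STREAM \n"
    "192.168.122.159      STREAM "
)

# The bug is triggered when the first resolved address is link-local.
first = sample.splitlines()[0].split()[0]
if ipaddress.ip_address(first).is_link_local:
    print("link-local address resolved first: bug would trigger")
else:
    print("ok")
```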

Comment 13 Michael Burman 2020-03-30 13:27:35 UTC
Clean verification is very complicated and not justified by this edge-case scenario.
Closing as CURRENTRELEASE.
If you believe this issue wasn't addressed, please reopen.

Fixed in ovirt-ansible-hosted-engine-setup-1.1.1 ovirt-hosted-engine-setup-2.4.4

