Bug 1531967 - ansible cannot log into hosts where sshd was configured by FreeIPA
Summary: ansible cannot log into hosts where sshd was configured by FreeIPA
Keywords:
Status: CLOSED DUPLICATE of bug 1529851
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Host-Deploy
Version: ---
Hardware: x86_64
OS: Linux
Priority: low
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Ondra Machacek
QA Contact: Pavel Stehlik
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-01-06 23:04 UTC by bugs
Modified: 2018-01-09 15:41 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-01-09 15:39:27 UTC
oVirt Team: Infra
Embargoed:


Attachments
requested upgrade.log (30.97 KB, text/plain)
2018-01-08 20:15 UTC, bugs

Description bugs 2018-01-06 23:04:32 UTC
Description of problem:
I started upgrading an oVirt cluster that I maintain to 4.2. During engine-setup I answered yes to setting up the OVS components, since that was the default. However, the cluster is still set to use bridged networking, not OVS (experimental).

Version-Release number of selected component (if applicable):
on engine:
ovirt-engine.noarch                          4.2.0.2-1.el7.centos     @ovirt-4.2
ovirt-engine-api-explorer.noarch             0.0.2-1.el7.centos       @ovirt-4.2
ovirt-engine-backend.noarch                  4.2.0.2-1.el7.centos     @ovirt-4.2
ovirt-engine-cli.noarch                      3.6.9.2-1.el7.centos     @ovirt-4.1
ovirt-engine-dashboard.noarch                1.2.0-1.el7.centos       @ovirt-4.2
ovirt-engine-dbscripts.noarch                4.2.0.2-1.el7.centos     @ovirt-4.2
ovirt-engine-dwh.noarch                      4.2.1-1.el7.centos       @ovirt-4.2
ovirt-engine-dwh-setup.noarch                4.2.1-1.el7.centos       @ovirt-4.2
ovirt-engine-extension-aaa-jdbc.noarch       1.1.6-1.el7.centos       @ovirt-4.1
ovirt-engine-extension-aaa-ldap.noarch       1.3.6-1.el7.centos       @ovirt-4.1
ovirt-engine-extension-aaa-ldap-setup.noarch 1.3.6-1.el7.centos       @ovirt-4.1
ovirt-engine-extension-aaa-misc.noarch       1.0.1-1.el7              @ovirt-4.1
ovirt-engine-extensions-api-impl.noarch      4.2.0.2-1.el7.centos     @ovirt-4.2
ovirt-engine-lib.noarch                      4.2.0.2-1.el7.centos     @ovirt-4.2
ovirt-engine-metrics.noarch                  1.1.1-1.el7.centos       @ovirt-4.2
ovirt-engine-restapi.noarch                  4.2.0.2-1.el7.centos     @ovirt-4.2
ovirt-engine-sdk-python.noarch               3.6.9.1-1.el7.centos     @ovirt-4.1
ovirt-engine-setup.noarch                    4.2.0.2-1.el7.centos     @ovirt-4.2
ovirt-engine-setup-base.noarch               4.2.0.2-1.el7.centos     @ovirt-4.2
ovirt-engine-setup-plugin-ovirt-engine.noarch
ovirt-engine-setup-plugin-ovirt-engine-common.noarch
ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch
ovirt-engine-setup-plugin-websocket-proxy.noarch
ovirt-engine-tools.noarch                    4.2.0.2-1.el7.centos     @ovirt-4.2
ovirt-engine-tools-backup.noarch             4.2.0.2-1.el7.centos     @ovirt-4.2
ovirt-engine-vmconsole-proxy-helper.noarch   4.2.0.2-1.el7.centos     @ovirt-4.2
ovirt-engine-webadmin-portal.noarch          4.2.0.2-1.el7.centos     @ovirt-4.2
ovirt-engine-websocket-proxy.noarch          4.2.0.2-1.el7.centos     @ovirt-4.2
ovirt-engine-wildfly.x86_64                  11.0.0-1.el7.centos      @ovirt-4.2
ovirt-engine-wildfly-overlay.noarch          11.0.1-1.el7.centos      @ovirt-4.2
python-ovirt-engine-sdk4.x86_64              4.2.2-2.el7.centos       @ovirt-4.2

on host:
openvswitch.x86_64                           1:2.7.3-1.1fc27.el7      @ovirt-4.2-centos-ovirt42
openvswitch-ovn-central.x86_64               1:2.7.3-1.1fc27.el7      @ovirt-4.2-centos-ovirt42
openvswitch-ovn-common.x86_64                1:2.7.3-1.1fc27.el7      @ovirt-4.2-centos-ovirt42
openvswitch-ovn-host.x86_64                  1:2.7.3-1.1fc27.el7      @ovirt-4.2-centos-ovirt42
python2-openvswitch.noarch                   1:2.7.3-1.1fc27.el7      @ovirt-4.2-centos-ovirt42
vdsm.x86_64                                  4.20.9.3-1.el7.centos    @ovirt-4.2
vdsm-api.noarch                              4.20.9.3-1.el7.centos    @ovirt-4.2
vdsm-client.noarch                           4.20.9.3-1.el7.centos    @ovirt-4.2
vdsm-hook-ethtool-options.noarch             4.20.9.3-1.el7.centos    @ovirt-4.2
vdsm-hook-fcoe.noarch                        4.20.9.3-1.el7.centos    @ovirt-4.2
vdsm-hook-openstacknet.noarch                4.20.9.3-1.el7.centos    @ovirt-4.2
vdsm-hook-vfio-mdev.noarch                   4.20.9.3-1.el7.centos    @ovirt-4.2
vdsm-hook-vhostmd.noarch                     4.20.9.3-1.el7.centos    @ovirt-4.2
vdsm-hook-vmfex-dev.noarch                   4.20.9.3-1.el7.centos    @ovirt-4.2
vdsm-http.noarch                             4.20.9.3-1.el7.centos    @ovirt-4.2
vdsm-jsonrpc.noarch                          4.20.9.3-1.el7.centos    @ovirt-4.2
vdsm-jsonrpc-java.noarch                     1.4.11-1.el7.centos      @ovirt-4.2
vdsm-python.noarch                           4.20.9.3-1.el7.centos    @ovirt-4.2
vdsm-yajsonrpc.noarch                        4.20.9.3-1.el7.centos    @ovirt-4.2


How reproducible:


Steps to Reproduce:
1. yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
2. yum update (see the shell sketch after this list)
3. hit upgrade in the web interface
4. reboot
5. hit upgrade in the web interface again
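
For reference, the command-line portion of the steps above, run on the host (the repo URL is the one from step 1; steps 3-5 are triggered from the web UI):

# Steps 1-2: add the oVirt 4.2 release repo, then update packages
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
yum update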

Actual results:
# tail -2 /var/log/vdsm/upgrade.log
MainThread::DEBUG::2018-01-06 16:27:59,353::cmdutils::150::root::(exec_cmd) /usr/share/openvswitch/scripts/ovs-ctl status (cwd None)
MainThread::DEBUG::2018-01-06 16:27:59,377::cmdutils::158::root::(exec_cmd) FAILED: <err> = ''; <rc> = 1
# systemctl status openvswitch
● openvswitch.service - Open vSwitch
   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
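
For anyone hitting the same symptom, the failing check can be re-run by hand; this is a sketch, with the script path taken from the upgrade.log lines above:

# Re-run the status check that upgrade.log shows failing; rc=1 while the service is inactive
/usr/share/openvswitch/scripts/ovs-ctl status; echo "rc=$?"
# Confirm the service state reported by systemd
systemctl is-active openvswitch    # prints "inactive"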

Expected results:
The host upgrade completes successfully.

Additional info:
I don't really want to use openvswitch, since it is still marked experimental. Also, I was able to activate the host and start/migrate VMs on it.

Comment 1 Yaniv Kaul 2018-01-07 08:13:17 UTC
Can you please share the upgrade log?

Comment 2 Dan Kenigsberg 2018-01-07 15:16:15 UTC
What exactly failed to upgrade?

I suppose that during ovirt-engine setup you chose OVN (note: not OVS, as the latter is selected per cluster, not during setup). This means that OVN is going to be configured on new clusters where it is set as a provider.

Unless explicitly requested by you, ovirt-provider-ovn would not be attached to existing clusters, and OVN would not be configured there. Those existing hosts should be working fine with openvswitch disabled.

Please elaborate what is not working for you.

Comment 3 bugs 2018-01-08 19:33:27 UTC
Dan

My current cluster is set to use bridged networking and I answered yes to set up the OVN provider on my hosted-engine. 

So with this configuration:

1. Should openvswitch even be configured on my hosts, given that the cluster is configured to use bridged networking?
2. In my opinion it would avoid confusion if VDSM could detect that openvswitch is not running and either start it or ignore the error.

My overall issue is that when I go into the web interface, click on a host, and then go to Installation > Upgrade, it fails. In the event log I get "Failed to upgrade host".

Comment 4 bugs 2018-01-08 20:15:08 UTC
Created attachment 1378728 [details]
requested upgrade.log

Comment 5 bugs 2018-01-09 02:31:03 UTC
Looking into my engine logs a little more, I found that it was ansible not being able to log in to my oVirt nodes.

2018-01-08 19:45:53,128 p=3892 u=ovirt |  Using /usr/share/ovirt-engine/playbooks/ansible.cfg as config file
2018-01-08 19:45:53,338 p=3892 u=ovirt |  PLAY [all] *********************************************************************
2018-01-08 19:45:53,365 p=3892 u=ovirt |  TASK [ovirt-host-upgrade : Install ovirt-host package if it isn't installed] ***
2018-01-08 19:45:53,554 p=3892 u=ovirt |  fatal: [hyp1.example.com]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh_exchange_identification: Connection closed by remote host\r\n", "unreachable": true}
2018-01-08 19:45:53,555 p=3892 u=ovirt |  PLAY RECAP *********************************************************************
2018-01-08 19:45:53,555 p=3892 u=ovirt |  hyp1.example.com : ok=0    changed=0    unreachable=1    failed=0 

Commenting out the line below in /etc/ssh/ssh_config (a line FreeIPA installs) fixes the issue:
ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h
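
For reference, a quick way to confirm the proxy is the culprit, plus a one-liner form of the workaround. This is a sketch: the hostname is the one from the ansible log above, and you should back up ssh_config before editing it.

# Bypass the proxy for a single connection; if this succeeds, the ProxyCommand is the problem
ssh -o ProxyCommand=none hyp1.example.com
# Comment out the sss_ssh_knownhostsproxy line in place (keeps a ssh_config.bak backup)
sed -i.bak 's|^[[:space:]]*ProxyCommand /usr/bin/sss_ssh_knownhostsproxy|#&|' /etc/ssh/ssh_config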

Comment 6 Dan Kenigsberg 2018-01-09 12:30:59 UTC
So it seems this is unrelated to OVS; I don't know if it's a bug or merits only a release note. Let the infra team decide.

(In reply to bugs from comment #3)
> Dan
> 
> My current cluster is set to use bridged networking and I answered yes to
> set up the OVN provider on my hosted-engine. 
> 
> So with this configuration:
> 
> 1. Should openvswitch even be configured on my hosts in the cluster that is
> configured to use bridged networking.

No, as the existing cluster does not have OVN as its external network provider.

> 2. In my opinion it would avoid confusion if VDSM was able to figure out
> that openvswitch is not running and start it, or ignore the error.

I believe that this is indeed the case. If you find that it is not, please provide the {super,}vdsm.log showing the error in a fresh bug.

Comment 7 Martin Perina 2018-01-09 14:07:09 UTC
Ondro, could it be the same issue as in BZ1529851?

Comment 8 Ondra Machacek 2018-01-09 15:39:27 UTC

*** This bug has been marked as a duplicate of bug 1529851 ***

Comment 9 Ondra Machacek 2018-01-09 15:41:05 UTC
bugs: Thanks for finding the root cause. I've closed this one and will solve the root cause in bz #1529851.

