Bug 1531967
| Summary: | ansible cannot log into hosts where sshd was configured by FreeIPA | | |
|---|---|---|---|
| Product: | [oVirt] ovirt-engine | Reporter: | bugs |
| Component: | Host-Deploy | Assignee: | Ondra Machacek <omachace> |
| Status: | CLOSED DUPLICATE | QA Contact: | Pavel Stehlik <pstehlik> |
| Severity: | high | Docs Contact: | |
| Priority: | low | | |
| Version: | --- | CC: | bugs, bugs, mperina, omachace |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-01-09 15:39:27 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | Infra | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | requested upgrade.log (attachment 1378728) | | |
Description bugs 2018-01-06 23:04:32 UTC
Can you please share the upgrade log? What exactly failed to upgrade?

I suppose that during ovirt-engine setup you chose OVN (note: not OVS, as the latter is configured per cluster, not at setup). This means that OVN is going to be configured on new clusters where it is set as a provider. Unless explicitly requested by you, ovirt-provider-ovn would not be attached to existing clusters, and OVN would not be configured there. Those existing hosts should be working fine with openvswitch disabled. Please elaborate on what is not working for you.

Dan,

My current cluster is set to use bridged networking and I answered yes to set up the OVN provider on my hosted-engine. So with this configuration:

1. Should openvswitch even be configured on my hosts in the cluster that is configured to use bridged networking?
2. In my opinion it would avoid confusion if VDSM were able to figure out that openvswitch is not running and either start it or ignore the error.

My overall issue is that when I go into the web interface, click on a host, and then go to Installation > Upgrade, it fails. In the event log I get "Failed to upgrade host".

Created attachment 1378728 [details]
requested upgrade.log
Looking into my engine logs a little more, I found out that it was ansible not being able to log in to my oVirt nodes.
2018-01-08 19:45:53,128 p=3892 u=ovirt | Using /usr/share/ovirt-engine/playbooks/ansible.cfg as config file
2018-01-08 19:45:53,338 p=3892 u=ovirt | PLAY [all] *********************************************************************
2018-01-08 19:45:53,365 p=3892 u=ovirt | TASK [ovirt-host-upgrade : Install ovirt-host package if it isn't installed] ***
2018-01-08 19:45:53,554 p=3892 u=ovirt | fatal: [hyp1.example.com]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh_exchange_identification: Connection closed by remote host\r\n", "unreachable": true}
2018-01-08 19:45:53,555 p=3892 u=ovirt | PLAY RECAP *********************************************************************
2018-01-08 19:45:53,555 p=3892 u=ovirt | hyp1.example.com : ok=0 changed=0 unreachable=1 failed=0
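For reference (this check is not part of the original report), the same failure can be reproduced outside the engine by opening a plain SSH connection from the engine machine to the host named in the log:

ssh -v root@hyp1.example.com

If the same "ssh_exchange_identification: Connection closed by remote host" error appears there, the problem is at the SSH layer rather than in ansible itself.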
Commenting out the line below in /etc/ssh/ssh_config, which FreeIPA installs, fixes the issue.
ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h
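A minimal sketch of the workaround, assuming the line sits in the block that ipa-client-install added to /etc/ssh/ssh_config on the engine machine (the surrounding lines vary by FreeIPA client version):

# Installed by the FreeIPA client; disabled as a workaround for this failure
#ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h

With the line commented out, SSH connections no longer pass through sss_ssh_knownhostsproxy, at the cost of losing the SSSD-managed known-hosts lookup for every user on that machine.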
So it seems this is unrelated to OVS; I don't know if it's a bug or merits only a release note. Let the infra team decide.

(In reply to bugs from comment #3)
> Dan
>
> My current cluster is set to use bridged networking and I answered yes to
> set up the OVN provider on my hosted-engine.
>
> So with this configuration:
>
> 1. Should openvswitch even be configured on my hosts in the cluster that is
> configured to use bridged networking.

No, as the existing cluster does not have OVN as its external network provider.

> 2. In my opinion it would avoid confusion if VDSM was able to figure out
> that openvswitch is not running and start it, or ignore the error.

I believe that this is indeed the case. If you find that it is not, please provide the {super,}vdsm.log showing the error in a fresh bug.

*** This bug has been marked as a duplicate of bug 1529851 ***

bugs

Thanks for finding out the root cause; I've closed this one and will solve the root cause in bz #1529851.
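For completeness, a narrower alternative to editing the system-wide ssh_config (an untested assumption, not the fix adopted in bug 1529851) would be to bypass the proxy only for the engine's ansible runs, e.g. in the /usr/share/ovirt-engine/playbooks/ansible.cfg mentioned in the log above:

[ssh_connection]
# Bypass the FreeIPA-installed ProxyCommand for engine-initiated connections only
ssh_args = -o ProxyCommand=none

Note that overriding ssh_args replaces ansible's default SSH options, and that the file is shipped by the ovirt-engine package, so a package update may overwrite the change.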