Description of problem:
I'm using two hosts: one for the engine, the other as a hypervisor. Both run Fedora 19 with the latest updates. I installed the oVirt engine, release 3.5 RC2, on the first host, and from the web admin UI I tried to add the second host. The first attempt produced an error saying that I have to set the cluster compatibility level to 3.3; after doing that I still wasn't able to add my host. Trying to add it to my cluster I get:

Status of host f19t12 was set to NonOperational.
Gluster command [<UNKNOWN>] failed on server f19t12.
Gluster command [<UNKNOWN>] failed on server f19t12.
Host f19t12 does not enforce SELinux.

That host does in fact enforce SELinux.

Version-Release number of selected component (if applicable):
3.5 RC2

How reproducible:
100%

Steps to Reproduce:
1. Set up two Fedora 19 hosts and install the engine on one of them
2. Downgrade the cluster compatibility level to 3.3
3. Without adding any additional RPM repository on the second host, try to add it via the web admin GUI

Actual results:
Errors about SELinux and Gluster.

Expected results:
The host is added correctly, or at least the failure advises the user to upgrade the vdsm packages to the required release.

Additional info:
Could you provide ovirt-host-deploy.log for both a failed and a successful installation? vdsm.log and engine.log are also required to understand what was deemed wrong.
Created attachment 938901 [details] Not working 1 nw_1: fresh f19, no ovirt 3.5 repo, jsonrpc, failed with 'Host f19t14.localdomain installation failed. Network error during communication with the host.'
Created attachment 938902 [details] Not working 2 fresh f19, no ovirt 3.5 repo, xmlrpc, failed with 'Host f19t14.localdomain is compatible with versions (3.0,3.1,3.2,3.3) and cannot join Cluster Default which is set to version 3.5.'
Created attachment 938903 [details] Not working 3 fresh f19, with ovirt 3.5 repo just relying on host-deploy updates, jsonrpc, failed with 'Host f19t14.localdomain is compatible with versions (3.0,3.1,3.2,3.3) and cannot join Cluster Default which is set to version 3.5.'
Created attachment 938904 [details] Not working 4 fresh f19, with ovirt 3.5 repo just relying on host-deploy updates, adding to a new cluster with 3.3 compatibility mode, xmlrpc, failed with 'Gluster command [<UNKNOWN>] failed on server f19t14.localdomain.'
Created attachment 938907 [details] Working fresh f19, with ovirt 3.5 repo after manual explicit 'yum update', jsonrpc, working
I tried to reproduce on a fresh environment and it is 100% reproducible. I have four distinct cases where it doesn't work and just one working case; I'm attaching the required logs.

nw_1: fresh f19, no ovirt 3.5 repo, jsonrpc, failed with 'Host f19t14.localdomain installation failed. Network error during communication with the host.'

nw_2: fresh f19, no ovirt 3.5 repo, xmlrpc, failed with 'Host f19t14.localdomain is compatible with versions (3.0,3.1,3.2,3.3) and cannot join Cluster Default which is set to version 3.5.'

nw_3: fresh f19, with ovirt 3.5 repo just relying on host-deploy updates, jsonrpc, failed with 'Host f19t14.localdomain is compatible with versions (3.0,3.1,3.2,3.3) and cannot join Cluster Default which is set to version 3.5.'

nw_4: fresh f19, with ovirt 3.5 repo just relying on host-deploy updates, adding to a new cluster with 3.3 compatibility mode, xmlrpc, failed with 'Gluster command [<UNKNOWN>] failed on server f19t14.localdomain.'

w: fresh f19, with ovirt 3.5 repo after a manual explicit 'yum update', jsonrpc, working
Looking at the vdsm code, caps.py:

"""
def _getVersionInfo():
    # commit bbeb165e42673cddc87495c3d12c4a7f7572013c
    # added default abort of the VM migration on EIO.
    # libvirt 1.0.5.8 found in Fedora 19 does not export
    # that flag, even though it should be present since 1.0.1.
    if hasattr(libvirt, 'VIR_MIGRATE_ABORT_ON_ERROR'):
        return dsaversion.version_info

    logging.error('VIR_MIGRATE_ABORT_ON_ERROR not found in libvirt,'
                  ' support for clusterLevel >= 3.4 is disabled.'
                  ' For Fedora 19 users, please consider upgrading'
                  ' libvirt from the virt-preview repository')

    from distutils.version import StrictVersion
    # Workaround: we drop the cluster 3.4+
    # compatibility when we run on top of
    # a libvirt without this flag.
    info = dsaversion.version_info.copy()
    maxVer = StrictVersion('3.4')
    info['clusterLevels'] = [ver for ver in info['clusterLevels']
                             if StrictVersion(ver) < maxVer]
    return info
"""

So it looks like this is a libvirt version limitation? Danken? Yaniv?
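The effect of that workaround on the advertised cluster levels can be sketched standalone (this is an illustration, not vdsm code: plain tuple comparison stands in for distutils' StrictVersion, and the function name and level list are assumed to mirror the ones in this bug):

```python
def drop_new_cluster_levels(cluster_levels, max_ver=(3, 4)):
    """Drop cluster levels >= max_ver, as vdsm does when libvirt
    lacks VIR_MIGRATE_ABORT_ON_ERROR."""
    def parse(ver):
        # '3.4' -> (3, 4); tuples compare component-wise like versions
        return tuple(int(part) for part in ver.split('.'))
    return [ver for ver in cluster_levels if parse(ver) < max_ver]

# On stock F19 libvirt vdsm ends up advertising only the old levels,
# which matches the 'compatible with versions (3.0,3.1,3.2,3.3)' error:
print(drop_new_cluster_levels(['3.0', '3.1', '3.2', '3.3', '3.4', '3.5']))
# ['3.0', '3.1', '3.2', '3.3']
```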
*** Bug 1142959 has been marked as a duplicate of this bug. ***
Yes, Barak. The libvirt shipped in F19 is not new enough to support clusterLevel 3.4 or 3.5. Users who want the newer features need to fetch libvirt from virt-preview or upstream. There's nothing we can do about it, apart from waiting for F19's obsolescence.
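For reference, the gating decision above boils down to a hasattr() probe on the libvirt python bindings. A minimal sketch of that decision, using stand-in objects since a matching libvirt build may not be installed (the SimpleNamespace stand-ins and the function name are illustrative, not vdsm code):

```python
from types import SimpleNamespace

def supports_cluster_34_plus(libvirt_mod):
    # vdsm enables clusterLevel >= 3.4 only when the bindings export
    # the migration-abort flag (added in libvirt 1.0.1, but missing
    # from the F19 1.0.5.x build).
    return hasattr(libvirt_mod, 'VIR_MIGRATE_ABORT_ON_ERROR')

f19_libvirt = SimpleNamespace()  # stand-in: flag absent, as on stock F19
preview_libvirt = SimpleNamespace(
    VIR_MIGRATE_ABORT_ON_ERROR=object())  # stand-in: flag present

print(supports_cluster_34_plus(f19_libvirt))      # False
print(supports_cluster_34_plus(preview_libvirt))  # True
```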