Description of problem:

Even though a 3.6 NGN is detected as vds_type=1 by an up-to-date host-deploy, it still cannot be added into a 3.6 cluster in a 4.1 engine, because it doesn't have collectd. (I heard rumours there should be a per-cluster-version list of packages, but I can only find the list of packages for check-update, not for deployment.)

~~~
[root@jbelka-vm2 yum.repos.d]# grep 'ovirt-node' /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20170317162055-10.34.63.223-17c2c387.log
2017-03-17 16:20:49 DEBUG otopi.context context.dumpEnvironment:770 ENV VDSM/ovirt-node=bool:'True'
2017-03-17 16:20:54 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:RECEIVE env-get -k VDSM/ovirt-node
2017-03-17 16:20:54 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND ***D:VALUE VDSM/ovirt-node=bool:True
2017-03-17 16:20:54 DEBUG otopi.context context.dumpEnvironment:770 ENV VDSM/ovirt-node=bool:'True'
2017-03-17 16:20:55 DEBUG otopi.context context.dumpEnvironment:770 ENV VDSM/ovirt-node=bool:'True'
^^^^

# grep 'vdsm.*version' /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20170317162055-10.34.63.223-17c2c387.log
2017-03-17 16:20:54 DEBUG otopi.plugins.ovirt_host_deploy.vdsm.packages packages._validation:84 Found vdsm {'display_name': 'vdsm-4.17.38-1.el7ev.noarch', 'name': 'vdsm', 'epoch': '0', 'version': '4.17.38', 'release': '1.el7ev', 'operation': 'installed', 'arch': 'noarch'}

# grep 'executeMethod.*otopi.plugins.ovirt_host_deploy.*package' /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20170317162055-10.34.63.223-17c2c387.log
2017-03-17 16:20:50 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.ovirt_host_deploy.core.offlinepackager.Plugin._init
2017-03-17 16:20:50 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.ovirt_host_deploy.gluster.packages.Plugin._setup
2017-03-17 16:20:50 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.ovirt_host_deploy.hosted-engine.packages.Plugin._init
2017-03-17 16:20:50 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.ovirt_host_deploy.kdump.packages.Plugin._init
2017-03-17 16:20:50 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.ovirt_host_deploy.vdsm.packages.Plugin._init
2017-03-17 16:20:50 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.ovirt_host_deploy.vmconsole.packages.Plugin._init
2017-03-17 16:20:52 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.ovirt_host_deploy.vdsm.packages.Plugin._setup
2017-03-17 16:20:53 DEBUG otopi.context context._executeMethod:128 Stage internal_packages METHOD otopi.plugins.ovirt_host_deploy.vdsm.bridge.Plugin._internal_packages
2017-03-17 16:20:53 DEBUG otopi.context context._executeMethod:128 Stage internal_packages METHOD otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._packages
2017-03-17 16:20:54 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.ovirt_host_deploy.kdump.packages.Plugin._customization
2017-03-17 16:20:54 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.ovirt_host_deploy.vmconsole.packages.Plugin._customization
2017-03-17 16:20:54 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.ovirt_host_deploy.gluster.packages.Plugin._validation
2017-03-17 16:20:54 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.ovirt_host_deploy.hosted-engine.packages.Plugin._validation
2017-03-17 16:20:54 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.ovirt_host_deploy.kdump.packages.Plugin._validate
2017-03-17 16:20:54 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.ovirt_host_deploy.vdsm.packages.Plugin._validation
2017-03-17 16:20:54 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.ovirt_host_deploy.vmconsole.packages.Plugin._validation
2017-03-17 16:20:55 DEBUG otopi.context context._executeMethod:128 Stage packages METHOD otopi.plugins.ovirt_host_deploy.collectd.packages.Plugin._packages

engine=# select vds_name,host_name,vds_type,rpm_version,supported_cluster_levels,supported_engines from vds;
    vds_name    |  host_name   | vds_type | rpm_version | supported_cluster_levels | supported_engines
----------------+--------------+----------+-------------+--------------------------+-------------------
 dell-r210ii-04 | 10.34.63.223 |        1 |             |                          |
(1 row)
~~~

Version-Release number of selected component (if applicable):
ovirt-engine-4.1.1.5-0.1.el7.noarch
ovirt-host-deploy-1.6.3-1.el7ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. have a 4.1 engine with a 3.6 cluster
2. add a 3.6 NGN into the 3.6 cluster
3.

Actual results:
installation fails

Expected results:
seems it should work

Additional info:
collectd is added to the package list inside host-deploy itself, regardless of which cluster level versions the host is able to support (based on its VDSM version).
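To illustrate the shape of the problem: host-deploy installs the metrics packages unconditionally, while a version-aware fix would gate them on the cluster level the host is joining. The sketch below is hypothetical (the function name and the exact package set are illustrative, not the real host-deploy code), assuming the 4.1 metrics stack is the first to require collectd/fluentd:

```python
# Hypothetical sketch: gate metrics packages on the target cluster level.
# ovirt-host-deploy as shipped does NOT do this; it hard-codes the list.

def select_metrics_packages(cluster_level):
    """Return metrics packages to install for a given cluster level.

    cluster_level is a (major, minor) tuple, e.g. (3, 6) or (4, 1).
    collectd/fluentd arrived with the 4.1 metrics stack, so older
    cluster levels would get no metrics packages at all.
    """
    if cluster_level >= (4, 1):
        # Illustrative package names; the real list lives in host-deploy.
        return ['collectd', 'fluentd']
    return []


# A 3.6 cluster (the failing case in this bug) would request nothing:
print(select_metrics_packages((3, 6)))  # -> []
print(select_metrics_packages((4, 1)))  # -> ['collectd', 'fluentd']
```

Under such a scheme, adding a 3.6 NGN to a 3.6 cluster would simply skip collectd instead of failing the deployment.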
IIRC we discussed this flow before adding collectd/fluentd to hosts and decided to keep it that way: new hosts should only be installed as 4.1. Yaniv?
Agreed - why should we add new hosts that are old?
OK, closing for now. Please reopen if needed.
Yaniv - shouldn't this be clearly documented?
Didi, can you add a known issue text?
*** Bug 1438347 has been marked as a duplicate of this bug. ***
(In reply to Yaniv Kaul from comment #4)
> Agreed - why should we add new hosts that are old?

There are users that control which yum repositories their hosts see via Foreman/Satellite, in order to protect themselves from unintended upgrades.

I would like this bug to be fixed, so that we allow them to upgrade the Engine first, see that things work while keeping to 4.0.z upgrades/installations, add a 4.2 cluster, and only finally enable 4.2 on their production hosts.
(In reply to Dan Kenigsberg from comment #9)
> (In reply to Yaniv Kaul from comment #4)
> > Agreed - why should we add new hosts that are old?
>
> There are users that control which yum repositories their hosts see via
> Foreman/Satellite, in order to protect themselves from unintended upgrades.
>
> I would like this bug to be fixed, so that we allow them to upgrade Engine
> first, see that things work, while keeping with 4.0.z upgrades/installation,
> add 4.2 cluster, and only finally, enable 4.2 in their production hosts.

This is an NGN bug and only affects adding new nodes. I don't understand the need to resolve this. Can you please elaborate?
I got to this bug because bug 1438347 was marked as its duplicate. Bug 1438347 was seen on a plain el7 host, unrelated to NGN.
Can we move the version- and arch-specific package list into host-deploy itself, having the engine only request an update rather than list which packages to upgrade?
(In reply to Yaniv Dary from comment #12)
> Can we move the package to update into the host-deploy that is version and
> arch specific? Having engine only request an update and not list which
> packages to upgrade.

It will not happen for 4.1.1-1, for which we do want the Known Issue, so please open a new bug to discuss that.

It was already agreed not to do that (comment 4), but we can always change our mind. Personally, I would not do that. IMO it should be the opposite: the engine should send the list of packages, and this list should also be built based on version/arch (if needed, as well as perhaps other factors). I'd also change the host-deploy logic to be the same. Currently host-mgmt (what the Update Manager runs) works this way, while host-deploy does not (it has the list of metrics packages hard-coded in itself, without conditions). If you want host-deploy to decide on this, then the engine would have to provide host-deploy with information it does not currently have (version, arch, etc.), so this still requires a change in both of them.
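The engine-side alternative described above can be sketched roughly as follows. This is purely illustrative (the table contents, function name, and base package list are assumptions, not the real engine configuration): the engine would build the full package list per cluster version (and arch, if needed) and hand it to host-deploy, as host-mgmt already does for updates:

```python
# Hypothetical engine-side sketch: build the install list per cluster
# version and send it to host-deploy, instead of host-deploy hard-coding
# the metrics packages. The mapping below is illustrative only.

METRICS_PACKAGES_BY_CLUSTER = {
    '3.6': [],
    '4.0': [],
    '4.1': ['collectd', 'fluentd'],
}


def packages_for_host(cluster_version, base_packages=('vdsm',)):
    """Full package list the engine would send to host-deploy.

    Unknown cluster versions conservatively get no metrics packages.
    """
    metrics = METRICS_PACKAGES_BY_CLUSTER.get(cluster_version, [])
    return list(base_packages) + metrics


print(packages_for_host('3.6'))  # -> ['vdsm']
print(packages_for_host('4.1'))  # -> ['vdsm', 'collectd', 'fluentd']
```

The design point is that the conditional logic lives in one place (the engine, which already knows the cluster version), so host-deploy stays a dumb executor and does not need extra context passed to it.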
There has been no change/code to be tested here. The Doc Text clearly states it is expected behavior. Either WONTFIX, or ship code to be tested that changes the originally reported behavior. I'm for the first one.
Target release should be placed once a package build is known to fix an issue. Since this bug was not modified, the target version has been reset. Please use target milestone to plan a fix for an oVirt release.
(In reply to Jiri Belka from comment #14)
> There has been no change/code to be tested here.

Actually there was - I changed the bug type to Known Issue.

Am I missing any extra step needed for adding a Known Issue? Some other flag?

> Doc Text clearly states it is expected behavior.

Right. The current bug is only to add a Known Issue.

Please verify that the "workaround" in the doc text works for you (or whatever other process you have for QE management of Known Issue bugs) and move to VERIFIED.

> Either WONTFIX or ship a code to be tested to change original reported
> behavior. I'm for the first one.