Bug 1433434 - 3.6ngn cannot be added into 3.6 cluster in 4.1 engine
Summary: 3.6ngn cannot be added into 3.6 cluster in 4.1 engine
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-host-deploy
Classification: oVirt
Component: General
Version: master
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.1.1-1
Target Release: ---
Assignee: Yedidyah Bar David
QA Contact: Jiri Belka
URL:
Whiteboard:
Duplicates: 1438347
Depends On:
Blocks:
 
Reported: 2017-03-17 16:06 UTC by Jiri Belka
Modified: 2017-04-21 11:04 UTC
CC List: 10 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
In Red Hat Virtualization 4.1, when the Manager deploys a host, collectd is always installed; however, host deployment will fail if you are attempting to deploy a new or reinstalled version 3.y host (in a cluster with 3.6 compatibility level), because collectd is not shipped in the 3.y repositories. To avoid this, ensure that you install and deploy any version 3.y hosts prior to upgrading the Manager to 4.1. Note that after the Manager upgrade, these hosts will continue to work, but you will not be able to reinstall them without first upgrading them to version 4.1.
Clone Of:
Environment:
Last Closed: 2017-04-21 09:48:14 UTC
oVirt Team: Integration
Embargoed:
rule-engine: ovirt-4.1+




Links
System ID: Red Hat Bugzilla 1444450
Private: 0
Priority: unspecified
Status: CLOSED
Summary: 3.6ngn cannot be reinstalled on 4.1 engine
Last Updated: 2021-02-22 00:41:40 UTC

Internal Links: 1444450

Description Jiri Belka 2017-03-17 16:06:56 UTC
Description of problem:

Even though a 3.6 NGN host is detected as vds_type=1 by an up-to-date host-deploy, it still cannot be added into a 3.6 cluster on a 4.1 engine, because it doesn't have collectd.

(I heard rumours that there should be a per-cluster-version list of packages, but I can only find a list of packages for checkupdate, not for deployment.)

~~~
[root@jbelka-vm2 yum.repos.d]# grep 'ovirt-node' /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20170317162055-10.34.63.223-17c2c387.log 
2017-03-17 16:20:49 DEBUG otopi.context context.dumpEnvironment:770 ENV VDSM/ovirt-node=bool:'True'
2017-03-17 16:20:54 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:RECEIVE    env-get -k VDSM/ovirt-node
2017-03-17 16:20:54 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND       ***D:VALUE VDSM/ovirt-node=bool:True
2017-03-17 16:20:54 DEBUG otopi.context context.dumpEnvironment:770 ENV VDSM/ovirt-node=bool:'True'
2017-03-17 16:20:55 DEBUG otopi.context context.dumpEnvironment:770 ENV VDSM/ovirt-node=bool:'True'
                      ^^^^

# grep 'vdsm.*version' /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20170317162055-10.34.63.223-17c2c387.log 
2017-03-17 16:20:54 DEBUG otopi.plugins.ovirt_host_deploy.vdsm.packages packages._validation:84 Found vdsm {'display_name': 'vdsm-4.17.38-1.el7ev.noarch', 'name': 'vdsm', 'epoch': '0', 'version': '4.17.38', 'release': '1.el7ev', 'operation': 'installed', 'arch': 'noarch'}

# grep 'executeMethod.*otopi.plugins.ovirt_host_deploy.*package' /var/log/ovirt-engine/host-deploy/ovirt-host-deploy-20170317162055-10.34.63.223-17c2c387.log 
2017-03-17 16:20:50 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.ovirt_host_deploy.core.offlinepackager.Plugin._init
2017-03-17 16:20:50 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.ovirt_host_deploy.gluster.packages.Plugin._setup
2017-03-17 16:20:50 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.ovirt_host_deploy.hosted-engine.packages.Plugin._init
2017-03-17 16:20:50 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.ovirt_host_deploy.kdump.packages.Plugin._init
2017-03-17 16:20:50 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.ovirt_host_deploy.vdsm.packages.Plugin._init
2017-03-17 16:20:50 DEBUG otopi.context context._executeMethod:128 Stage init METHOD otopi.plugins.ovirt_host_deploy.vmconsole.packages.Plugin._init
2017-03-17 16:20:52 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.ovirt_host_deploy.vdsm.packages.Plugin._setup
2017-03-17 16:20:53 DEBUG otopi.context context._executeMethod:128 Stage internal_packages METHOD otopi.plugins.ovirt_host_deploy.vdsm.bridge.Plugin._internal_packages
2017-03-17 16:20:53 DEBUG otopi.context context._executeMethod:128 Stage internal_packages METHOD otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._packages
2017-03-17 16:20:54 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.ovirt_host_deploy.kdump.packages.Plugin._customization
2017-03-17 16:20:54 DEBUG otopi.context context._executeMethod:128 Stage customization METHOD otopi.plugins.ovirt_host_deploy.vmconsole.packages.Plugin._customization
2017-03-17 16:20:54 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.ovirt_host_deploy.gluster.packages.Plugin._validation
2017-03-17 16:20:54 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.ovirt_host_deploy.hosted-engine.packages.Plugin._validation
2017-03-17 16:20:54 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.ovirt_host_deploy.kdump.packages.Plugin._validate
2017-03-17 16:20:54 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.ovirt_host_deploy.vdsm.packages.Plugin._validation
2017-03-17 16:20:54 DEBUG otopi.context context._executeMethod:128 Stage validation METHOD otopi.plugins.ovirt_host_deploy.vmconsole.packages.Plugin._validation
2017-03-17 16:20:55 DEBUG otopi.context context._executeMethod:128 Stage packages METHOD otopi.plugins.ovirt_host_deploy.collectd.packages.Plugin._packages

engine=# select vds_name,host_name,vds_type,rpm_version,supported_cluster_levels,supported_engines from vds;
    vds_name    |  host_name   | vds_type | rpm_version | supported_cluster_levels | supported_engines 
----------------+--------------+----------+-------------+--------------------------+-------------------
 dell-r210ii-04 | 10.34.63.223 |        1 |             |                          | 
(1 row)
~~~
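
The logs above show a 3.6-level host (vdsm 4.17.x), and deployment fails while installing collectd. Below is a minimal sketch, only an illustration and not part of host-deploy, for confirming on a yum-based EL7 host that collectd cannot be resolved from its enabled repositories:

~~~
import os
import subprocess

def package_resolvable(name):
    # 'yum -q list <pkg>' exits non-zero when the package is neither
    # installed nor available from any enabled repository.
    with open(os.devnull, 'w') as devnull:
        return subprocess.call(
            ['yum', '-q', 'list', name],
            stdout=devnull, stderr=devnull) == 0

if __name__ == '__main__':
    # On a 3.y host, per this bug, vdsm resolves but collectd does not.
    for pkg in ('vdsm', 'collectd'):
        print('%s: %s' % (pkg,
              'resolvable' if package_resolvable(pkg) else 'NOT resolvable'))
~~~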

Version-Release number of selected component (if applicable):
ovirt-engine-4.1.1.5-0.1.el7.noarch
ovirt-host-deploy-1.6.3-1.el7ev.noarch


How reproducible:
100%

Steps to Reproduce:
1. have a 4.1 engine with a 3.6 cluster
2. add a 3.6 NGN host into the 3.6 cluster (a scripted equivalent using the oVirt Python SDK is sketched after these steps)
3.
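
For completeness, a hedged reproduction sketch using ovirt-engine-sdk-python 4 (the webadmin UI flow above is equivalent; the engine URL, credentials, host and cluster names below are placeholders):

~~~
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details; adjust for the environment under test.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,  # lab only; use ca_file in real setups
)
try:
    hosts_service = connection.system_service().hosts_service()
    # Adding the 3.6 NGN host to the 3.6 cluster triggers host-deploy,
    # which then fails while trying to install collectd.
    hosts_service.add(
        types.Host(
            name='ngn-36-host',
            address='10.34.63.223',  # host address from the log above
            root_password='host-root-password',
            cluster=types.Cluster(name='cluster-3-6'),
        ),
    )
finally:
    connection.close()
~~~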

Actual results:
installation fails

Expected results:
seems it should work

Additional info:

Comment 2 Martin Perina 2017-03-18 11:39:55 UTC
collectd is added to the package list inside host-deploy itself, regardless of which cluster level version the host is able to support (based on the VDSM version).
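
Illustrative sketch of the behavior described above (plain Python, not the actual host-deploy plugin code; the 4.18 threshold below is an assumption standing in for "VDSM new enough for a 4.x cluster"):

~~~
from distutils.version import LooseVersion

# Metrics packages added by host-deploy (collectd/fluentd, per comment 3).
METRICS_PACKAGES = ['collectd', 'fluentd']

def packages_today(base_packages):
    # Current behavior described above: metrics packages are always appended,
    # regardless of the cluster level the host can support.
    return list(base_packages) + METRICS_PACKAGES

def packages_version_aware(base_packages, vdsm_version):
    # Hypothetical alternative: only append them when the detected VDSM is
    # new enough (the reported host runs vdsm 4.17.38, i.e. a 3.6 host).
    pkgs = list(base_packages)
    if LooseVersion(vdsm_version) >= LooseVersion('4.18'):
        pkgs += METRICS_PACKAGES
    return pkgs

print(packages_today(['vdsm']))                     # ['vdsm', 'collectd', 'fluentd']
print(packages_version_aware(['vdsm'], '4.17.38'))  # ['vdsm'] - no collectd
~~~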

Comment 3 Yedidyah Bar David 2017-03-19 10:59:00 UTC
IIRC we discussed this flow before adding collectd/fluentd to hosts and decided to keep it that way - that you should only install new hosts as 4.1. Yaniv?

Comment 4 Yaniv Kaul 2017-03-19 11:16:46 UTC
Agreed - why should we add new hosts that are old?

Comment 5 Yedidyah Bar David 2017-03-19 11:59:45 UTC
OK, closing for now. Please reopen if needed.

Comment 6 Yaniv Kaul 2017-03-19 12:33:53 UTC
Yaniv - shouldn't this be clearly documented?

Comment 7 Yaniv Lavi 2017-03-19 13:37:59 UTC
Didi, can you add a known issue text?

Comment 8 Yedidyah Bar David 2017-04-03 07:26:32 UTC
*** Bug 1438347 has been marked as a duplicate of this bug. ***

Comment 9 Dan Kenigsberg 2017-04-04 11:06:16 UTC
(In reply to Yaniv Kaul from comment #4)
> Agreed - why should we add new hosts that are old?

There are users that control which yum repositories their hosts see via Foreman/Satellite, in order to protect themselves from unintended upgrades.

I would like this bug to be fixed, so that we allow them to upgrade the Engine first, see that things work while keeping their hosts on 4.0.z upgrades/installations, add a 4.2 cluster, and only finally enable 4.2 on their production hosts.

Comment 10 Yaniv Lavi 2017-04-06 12:29:03 UTC
(In reply to Dan Kenigsberg from comment #9)
> (In reply to Yaniv Kaul from comment #4)
> > Agreed - why should we add new hosts that are old?
> 
> There are users that control which yum repositories their hosts see via
> Foreman/Satellite, in order to protect themselves from unintended upgrades.
> 
> I would like this bug to be fixed, so that we allow them to upgrade the
> Engine first, see that things work while keeping their hosts on 4.0.z
> upgrades/installations, add a 4.2 cluster, and only finally enable 4.2 on
> their production hosts.

This is an NGN bug and it only affects adding new nodes.
I don't understand the need to resolve this. Can you please elaborate?

Comment 11 Dan Kenigsberg 2017-04-06 12:37:28 UTC
I got to this bug because bug 1438347 was marked as its duplicate. Bug 1438347 was seen on a plain el7 host, unrelated to NGN.

Comment 12 Yaniv Lavi 2017-04-09 12:39:43 UTC
Can we move the version- and arch-specific package list into host-deploy itself, having the engine only request an update and not list which packages to upgrade?

Comment 13 Yedidyah Bar David 2017-04-09 13:21:13 UTC
(In reply to Yaniv Dary from comment #12)
> Can we move the version- and arch-specific package list into host-deploy
> itself, having the engine only request an update and not list which
> packages to upgrade?

It will not happen for 4.1.1-1, for which we do want the Known Issue, so please open a new bug to discuss it. It was already agreed not to do that (comment 4), but we can always change our minds.

Personally, I would not do that. IMO it should be the opposite: the engine should send the list of packages, and this list should also be built based on version/arch (if needed, as well as perhaps other factors). I'd also change the host-deploy logic to be the same. Currently host-mgmt (what the Update Manager runs) works this way, while host-deploy does not (it has the list of metrics packages hard-coded in itself, without conditions).

If you want host-deploy to decide on this, then the engine should provide host-deploy information it does not have currently (version, arch etc.), so this still requires a change in both of them.
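
A minimal sketch of the engine-driven approach described above (not actual ovirt-engine or host-deploy code; the per-level mapping is purely illustrative and only vdsm/collectd/fluentd are taken from this bug):

~~~
# Hypothetical per-cluster-level package table; real selection could also
# take arch and other factors into account, as suggested above.
PACKAGES_BY_CLUSTER_LEVEL = {
    '3.6': ['vdsm'],
    '4.0': ['vdsm'],
    '4.1': ['vdsm', 'collectd', 'fluentd'],
}

def deployment_packages(cluster_level, arch='x86_64'):
    # The engine would build this list per host and pass it to host-deploy,
    # instead of host-deploy hard-coding the metrics packages.
    packages = list(PACKAGES_BY_CLUSTER_LEVEL.get(
        cluster_level, PACKAGES_BY_CLUSTER_LEVEL['4.1']))
    # Arch-specific additions would be appended here if needed.
    return packages

print(deployment_packages('3.6'))  # ['vdsm'] - no collectd for a 3.6 cluster
print(deployment_packages('4.1'))  # ['vdsm', 'collectd', 'fluentd']
~~~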

Comment 14 Jiri Belka 2017-04-10 12:26:12 UTC
There has been no change/code to be tested here. Doc Text clearly states it is expected behavior.

Either WONTFIX or ship code to be tested that changes the originally reported behavior. I'm for the first one.

Comment 15 Red Hat Bugzilla Rules Engine 2017-04-10 12:26:19 UTC
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 16 Yedidyah Bar David 2017-04-12 10:08:37 UTC
(In reply to Jiri Belka from comment #14)
> There has been no change/code to be tested here.

Actually there was - I changed the bug type to Known Issue.

Am I missing any extra step needed for adding a Known Issue? Some other flag?

> Doc Text clearly states it
> is expected behavior.

Right. Current bug is only to add a Known Issue.

Please verify that the "workaround" in the doc text works for you (or whatever other process you have for QE management of Known Issue bugs) and move to VERIFIED.

> 
> Either WONTFIX or ship code to be tested that changes the originally
> reported behavior. I'm for the first one.

Comment 17 Jiri Belka 2017-04-12 10:45:09 UTC
(In reply to Yedidyah Bar David from comment #16)
> (In reply to Jiri Belka from comment #14)
> > There has been no change/code to be tested here.
> 
> Actually there was - I changed the bug type to Known Issue.
> 
> Am I missing any extra step needed for adding a Known Issue? Some other flag?
> 
> > Doc Text clearly states it
> > is expected behavior.
> 
> Right. Current bug is only to add a Known Issue.
> 
> Please verify that the "workaround" in the doc text works for you (or
> whatever other process you have for QE management of Known Issue bugs) and
> move to VERIFIED.
> 
> > 
> > Either WONTFIX or ship code to be tested that changes the originally
> > reported behavior. I'm for the first one.

