Description of problem:
While migrating a 3.5 SHE environment on EL6 to 3.5 SHE on EL7 (and then to a 3.6 SHE environment), following https://access.redhat.com/solutions/2351141, my additional host ended up in the same cluster as the EL6 hosts, i.e. it seems hosted-engine --deploy lost the feature of asking for a cluster name. (Thus the upgrade process is broken at step 3 of the detailed steps in the KB link above.)

Version-Release number of selected component (if applicable):
ovirt-host-deploy-offline-1.3.0-3.el7ev.x86_64
ovirt-hosted-engine-setup-1.2.6.1-1.el7ev.noarch
ovirt-hosted-engine-ha-1.2.10-1.el7ev.noarch
ovirt-host-deploy-1.3.2-1.el7ev.noarch
Red Hat Enterprise Virtualization Hypervisor release 7.2 (20160219.0.el7ev)

How reproducible:
100%

Steps to Reproduce:
1. Add an EL7 host as an additional node to a 3.5 SHE environment with EL6 hosts.
2. Run hosted-engine --deploy.
3. Check whether there is a question for the cluster name (you created a new cluster for this EL7 node in advance).

Actual results:
No such question; the host ends up in the Default cluster as non-operational.

Expected results:
There should be a question for the cluster name (according to the above KB).

Additional info:
https://access.redhat.com/solutions/2351141
What is the right env variable to resolve this problem?
(In reply to Yaniv Lavi (Dary) from comment #2)
> What is the right env variable to resolve this problem?

OVEHOSTED_ENGINE/clusterName

But we also need to write that value into the answer file loaded from the other host, since the key is present there as well, and that file gets loaded after the local initial answer file, overwriting it.
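As a rough sketch, the value above could be pinned in an otopi-style answer file (the cluster name MyEL7Cluster and the file path are placeholder assumptions, not taken from this report):

```ini
; /root/cluster-answers.conf -- hypothetical answer-file fragment
[environment:default]
; force the target cluster for the additional host being deployed
OVEHOSTED_ENGINE/clusterName=str:MyEL7Cluster
```

Assuming this version of ovirt-hosted-engine-setup accepts an extra answer file (e.g. via a --config-append-style option), it would be passed at deploy time; note that, as explained above, an answer file fetched from another host may still be loaded afterwards and overwrite this value, which is exactly the bug being reported.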
Just in case, I'd like to remind that there is a known workaround for this:
1. Set the non-responsive host into maintenance in the UI.
2. Edit the host and set it to belong to the desired host cluster.
3. Activate the host and wait until it becomes active.
This will be fixed by using the in-cluster migration policy; see bug #1503446.