Description of problem:
[RFE] We don't have the option to install the hosted engine on a specific datacenter and cluster.

Version-Release number of selected component (if applicable):
rhevm-appliance-20161214.0-1
Red Hat Virtualization Manager Version: 4.0.4.4-0.1.el7ev

How reproducible:
100%

Steps to Reproduce:
1. Install the hosted engine.

Actual results:
There is no option to install it on a specific datacenter and cluster. By default it is installed on the Default datacenter and Default cluster.

Expected results:
To be able to install the hosted engine on a specific datacenter and cluster, either interactively during deploy or with the answer file.

Additional info:
Workaround for this RFE:
1. Install the hosted engine on the Default datacenter and cluster.
2. Rename the datacenter and cluster.
3. Create a new Default datacenter and a new Default cluster.
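For illustration only, a non-interactive deploy driven by an answer file might look roughly like the sketch below. The two key names are hypothetical placeholders (no such options exist at the time of this report); --config-append is the standard way to feed hosted-engine-setup an answer file.

  # hypothetical answer-file entries (key names are illustrative, not existing options)
  [environment:default]
  OVEHOSTED_ENGINE/datacenterName=str:MyDatacenter
  OVEHOSTED_ENGINE/clusterName=str:MyCluster

  # run the deploy with the answer file appended
  hosted-engine --deploy --config-append=/root/he-answers.conf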
I'm not sure why, as part of the installation process, we are not asking for the data center name, cluster name, etc. I've long wished for this to happen. Clearly, in the cockpit-based wizard this could easily be done.
On the technical side, it's simply because 'Default' is pretty much hard-coded in different places in the SQL code we use to initialize the DB, so not even engine-setup is currently able to set different names for the initial entities. See for instance:
https://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=blob;f=packaging/dbscripts/data/00300_insert_storage_pool.sql;h=dd801f92b643b2ab671a7e3179a67380efb288e6;hb=refs/heads/master#l10
or
https://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=blob;f=packaging/dbscripts/inst_sp.sql;h=493435e33a3cdd9acfd0c31b24fceac0d884da3e;hb=refs/heads/master#l14
Maybe a simpler option is letting engine-setup create them as 'Default' and then having hosted-engine-setup rename the initial entities via the REST API.
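A minimal sketch of that rename-via-REST-API idea, using curl against the standard /ovirt-engine/api endpoints; the engine FQDN, credentials and target names below are placeholders, not values from this bug:

  # find the id of the Default data center
  curl -ks -u admin@internal:password -H 'Accept: application/xml' \
    'https://engine.example.com/ovirt-engine/api/datacenters?search=name%3DDefault'

  # rename it, substituting the id returned above for <DC_ID>
  curl -ks -u admin@internal:password -X PUT -H 'Content-Type: application/xml' \
    -d '<data_center><name>MyDatacenter</name></data_center>' \
    'https://engine.example.com/ovirt-engine/api/datacenters/<DC_ID>'

  # the Default cluster can be renamed the same way via /ovirt-engine/api/clusters/<CLUSTER_ID>
  # with a body of '<cluster><name>MyCluster</name></cluster>'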
IMHO it would be better to keep the Default datacenter/cluster and, in the hosted-engine deploy process, add something like:
Please specify the datacenter name you would like to create [Default]:
Please specify the cluster name you would like to create [Default]:
How is this RFE affected by the new 4.2 installation flow (and wizard)?
(In reply to Yaniv Kaul from comment #4)
> How is this RFE affected in the new 4.2 installation flow (and wizard) ?

Current code is still not ready for this, but now it could be easily addressed with the new flow.
*** Bug 1455163 has been marked as a duplicate of this bug. ***
Closing old RFEs. Please reopen if needed. Patches are welcome.
If it could be as easy as Simone wrote in comment #5 (https://bugzilla.redhat.com/show_bug.cgi?id=1406067#c5), it would make it easier to automate the deployment the right way, without the workaround.
(In reply to Kobi Hakimi from comment #8)
> If it could be easy as Simone wrote in comment #5
> https://bugzilla.redhat.com/show_bug.cgi?id=1406067#c5
> it will make it easier to automate the deploy in the right way without WA.

Are you sending a patch?
NP I'll do it and you'll do the automation tickets ;-)
(In reply to Kobi Hakimi from comment #10)
> NP I'll do it and you'll do the automation tickets ;-)

[ykaul@ykaul ovirt-system-tests]$ git shortlog -sn
   197  Yaniv Kaul
   135  gbenhaim
    80  Eyal Edri
    72  Leon Goldberg
    54  David Caro
    45  Daniel Belenky
    38  Ondřej Svoboda
    38  Sandro Bonazzola
    35  Dima Kuznetsov
    21  Dominik Holler
    21  Nadav Goldin
    21  Yedidyah Bar David
    19  Lev Veyde
Hi, I'd like to add a note to this issue following testing of backup/restore. I performed the following test:
1. Created a backup file for the HE environment with three hosts in golden_env_mixed_1:
   engine-backup --mode=backup --file=backup_compute-he-4 --log=log_compute-he-4_backup4.2
2. Deployed with restore from this backup file. The result is an environment with one host (the one running the HE VM) in the Default cluster, and the two other hosts in the original cluster.
The user can't work in such an environment, and there is no simple, obvious way to get all the hosts back into the original cluster in the restored environment.
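For reference, the restore side of this test was driven with the command below (the same invocation appears in the verification comment further down); the file name is the backup created in step 1:

  hosted-engine --deploy --restore-from-file=backup_compute-he-4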
Continuing comment 12:
While creating the backup file, the environment had some running VMs (regular, and highly available with a lease). After the restore completed, all the VMs that had been running on host1 (where the deployment/restore process was run) were simply removed from the environment (their disks are present in Storage/Disks, but the VMs are not).
There are some steps that can be done to get all three hosts into the same original cluster (see the steps below; a consolidated sketch of the hosted-engine commands follows the list). After these steps all three hosts are in the same cluster with a score of 3400, but the HE VM is not migratable: for some strange reason the engine VM is still listed in the Default cluster.

Steps after restore:
1. Take one of the two hosts and set it into maintenance mode.
2. Move it to the Default cluster.
3. Reinstall it from the engine, choosing hosted-engine deploy.
4. Set it to maintenance mode again.
5. Move it to the target cluster.
6. Activate it.
7. Set hosted-engine global maintenance mode.
8. Shut down the engine VM from the first host with hosted-engine --vm-shutdown.
9. Manually start it on the second host with hosted-engine --vm-start.
10. Set the first host into maintenance mode.
11. Migrate it to the second cluster.
12. Activate it.
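As referenced above, a rough consolidated sketch of the hosted-engine commands involved in those steps; host maintenance, cluster moves and reinstall are done from the engine UI/API and are not shown here:

  # on any HE host: enter global maintenance before touching the engine VM
  hosted-engine --set-maintenance --mode=global

  # on the first host: shut the engine VM down
  hosted-engine --vm-shutdown

  # on the second host: start the engine VM there
  hosted-engine --vm-start

  # once the hosts are back in the target cluster: leave global maintenance
  hosted-engine --set-maintenance --mode=none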
The bug is verified on:
ovirt-hosted-engine-setup-2.2.31-1.el7ev.noarch
ovirt-engine-4.2.7.4-0.1.el7ev.noarch
The verification was done by running 'hosted-engine --deploy --restore-from-file=backup_compute-he-4'. I had a choice for both the DC and the cluster, and the deployment succeeded. But the verification must also be done for a deploy from scratch.
Deploy from scratch ('hosted-engine --deploy') is verified on http://download.eng.bos.redhat.com/brewroot/packages/ovirt-hosted-engine-setup/2.2.32/1.el7ev/noarch/ovirt-hosted-engine-setup-2.2.32-1.el7ev.noarch.rpm
This bugzilla is included in oVirt 4.2.7 Async 1 release, published on November 13th 2018. Since the problem described in this bug report should be resolved in oVirt 4.2.7 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.