Bug 1406067 - [RFE] have the option to install hosted engine on specific datacenter and cluster.
Summary: [RFE] have the option to install hosted engine on specific datacenter and clu...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-hosted-engine-setup
Classification: oVirt
Component: RFEs
Version: 2.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ovirt-4.2.7-1
: ---
Assignee: Simone Tiraboschi
QA Contact: Polina
URL:
Whiteboard:
Duplicates: 1455163
Depends On:
Blocks: ovirt-hosted-engine-setup-2.2.32
 
Reported: 2016-12-19 16:00 UTC by Kobi Hakimi
Modified: 2021-05-01 16:46 UTC
CC: 8 users

Fixed In Version: ovirt-hosted-engine-setup-2.2.31-1
Clone Of:
Environment:
Last Closed: 2018-11-13 16:13:00 UTC
oVirt Team: Integration
Embargoed:
rule-engine: ovirt-4.2?
rule-engine: planning_ack?
sbonazzo: devel_ack+
rule-engine: testing_ack+


Attachments


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 95168 0 master MERGED deploy: support custom datacenter and cluster names 2020-12-15 08:42:59 UTC
oVirt gerrit 95181 0 ovirt-hosted-engine-setup-2.2 MERGED deploy: support custom datacenter and cluster names 2020-12-15 08:42:59 UTC

Description Kobi Hakimi 2016-12-19 16:00:00 UTC
Description of problem:
[RFE] There is no option to install the hosted engine on a specific datacenter and cluster.

Version-Release number of selected component (if applicable):
rhevm-appliance-20161214.0-1
 Red Hat Virtualization Manager Version: 4.0.4.4-0.1.el7ev 

How reproducible:
100%

Steps to Reproduce:
1. Install the hosted engine.

Actual results:
There is no option to install it in a specific datacenter and cluster.
By default it is installed in the Default datacenter and Default cluster.

Expected results:
To be able to install the hosted engine in a specific datacenter and cluster,
either interactively during deploy or through the answer file.

Additional info:
Workaround for this RFE:
1. Install the hosted engine in the Default datacenter and cluster.
2. Rename the datacenter and cluster.
3. Create a new Default datacenter and a new Default cluster.

Comment 1 Yaniv Kaul 2016-12-20 07:03:36 UTC
I'm not sure why, as part of the installation process, we are not asking for the data center name, cluster name, etc. I've long wished for this to happen. Clearly, in the cockpit-based wizard this could be done easily.

Comment 2 Simone Tiraboschi 2016-12-22 16:22:08 UTC
On the technical side, it's just because 'Default' is pretty much hard-coded in different places in the SQL code we use to initialize the DB, so not even engine-setup is currently able to set different names for the initial entities.

See for instance:
https://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=blob;f=packaging/dbscripts/data/00300_insert_storage_pool.sql;h=dd801f92b643b2ab671a7e3179a67380efb288e6;hb=refs/heads/master#l10
or
https://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=blob;f=packaging/dbscripts/inst_sp.sql;h=493435e33a3cdd9acfd0c31b24fceac0d884da3e;hb=refs/heads/master#l14

Maybe a simpler option is letting engine-setup create them as 'Default' and then having hosted-engine-setup rename the initial entities via the REST API.
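The rename-via-REST idea could be sketched roughly as follows. This only builds the XML payload that an authenticated PUT against the oVirt REST API update endpoints would carry; the entity IDs, the new names, and the exact endpoint paths shown in the comments are illustrative assumptions, not a confirmed implementation:

```python
# Sketch: rename the initial 'Default' entities over the oVirt REST API.
# The update request body is a small XML document carrying the new name;
# it would be sent with an authenticated PUT to an endpoint such as
# /ovirt-engine/api/datacenters/<dc_id> or /ovirt-engine/api/clusters/<id>
# (endpoint paths and IDs here are hypothetical placeholders).
import xml.etree.ElementTree as ET

def rename_body(entity_tag: str, new_name: str) -> str:
    """Build the XML payload that updates an entity's name."""
    root = ET.Element(entity_tag)
    ET.SubElement(root, "name").text = new_name
    return ET.tostring(root, encoding="unicode")

# Hypothetical usage: body for PUT /ovirt-engine/api/datacenters/<dc_id>
print(rename_body("data_center", "MyDatacenter"))
```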

Comment 3 Kobi Hakimi 2016-12-25 09:36:01 UTC
IMHO it would be better to keep the Default datacenter/cluster as the defaults
and, in the hosted-engine deploy process, add something like:

Please specify the datacenter name you would like to create [Default]:


Please specify the cluster name you would like to create [Default]:
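For unattended runs, the same choice would presumably land in the setup answer file. A hedged sketch of what that could look like; the OVEHOSTED_* key names below are illustrative assumptions and are not verified against the shipped code:

```ini
# Hypothetical answers.conf fragment -- key names are assumptions,
# not confirmed against the released ovirt-hosted-engine-setup.
[environment:default]
OVEHOSTED_STORAGE/datacenterName=str:MyDatacenter
OVEHOSTED_VM/clusterName=str:MyCluster
```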

Comment 4 Yaniv Kaul 2017-11-16 14:11:39 UTC
How is this RFE affected in the new 4.2 installation flow (and wizard) ?

Comment 5 Simone Tiraboschi 2017-11-16 14:17:49 UTC
(In reply to Yaniv Kaul from comment #4)
> How is this RFE affected in the new 4.2 installation flow (and wizard) ?

The current code is still not ready for this, but now it could be addressed easily with the new flow.

Comment 6 Simone Tiraboschi 2017-12-19 17:15:12 UTC
*** Bug 1455163 has been marked as a duplicate of this bug. ***

Comment 7 Yaniv Lavi 2018-06-06 12:30:30 UTC
Closing old RFEs. Please reopen if needed.
Patches are welcome.

Comment 8 Kobi Hakimi 2018-06-06 13:35:12 UTC
If it could be as easy as Simone wrote in comment #5 (https://bugzilla.redhat.com/show_bug.cgi?id=1406067#c5),
it would make it easier to automate the deploy in the right way, without workarounds.

Comment 9 Yaniv Kaul 2018-06-06 14:01:30 UTC
(In reply to Kobi Hakimi from comment #8)
> If it could be easy as Simone wrote in comment #5
> https://bugzilla.redhat.com/show_bug.cgi?id=1406067#c5
> it will make it easier to automate the deploy in the right way without WA.

Are you sending a patch?

Comment 10 Kobi Hakimi 2018-06-07 11:49:53 UTC
NP I'll do it and you'll do the automation tickets ;-)

Comment 11 Yaniv Kaul 2018-06-07 12:01:37 UTC
(In reply to Kobi Hakimi from comment #10)
> NP I'll do it and you'll do the automation tickets ;-)

[ykaul@ykaul ovirt-system-tests]$ git shortlog -sn
   197  Yaniv Kaul
   135  gbenhaim
    80  Eyal Edri
    72  Leon Goldberg
    54  David Caro
    45  Daniel Belenky
    38  Ondřej Svoboda
    38  Sandro Bonazzola
    35  Dima Kuznetsov
    21  Dominik Holler
    21  Nadav Goldin
    21  Yedidyah Bar David
    19  Lev Veyde

Comment 12 Polina 2018-10-25 15:23:32 UTC
Hi, I'd like to add a note to this issue following testing of backup/restore.
I performed the following test:
1. Created the backup file for the HE environment with three hosts in golden_env_mixed_1:
engine-backup --mode=backup --file=backup_compute-he-4 --log=log_compute-he-4_backup4.2
2. As a result of deploying with restore from this backup file, I got an environment with one host running the HE VM in the Default cluster, and the two other hosts in the original cluster.
The user can't work in such an environment, and there is no simple, obvious way to get all the hosts back into the original cluster in the restored environment.

Comment 13 Polina 2018-10-26 11:24:21 UTC
Continuing comment 12:
While creating the backup file, the environment had some running VMs (regular, and highly available with a lease). After the restore was completed, all the VMs running on host1 (where the deployment restore process was run) are simply removed from the environment (their disks are present in Storage/Disks, but the VMs are not).

There are some steps to be done to get all three hosts into the same original cluster (see the steps below). As a result of these steps, all three hosts with score 3400 are in the same cluster, but the HE VM is not migratable: the engine VM, for some strange reason, is still listed in the Default cluster.

The steps after restore:
1. Take one of the two hosts and set it into maintenance mode.
2. Move it to the Default cluster.
3. Reinstall it from the engine, choosing hosted-engine deploy.
4. Set it to maintenance mode again.
5. Move it to the target cluster and activate it.
6. Set hosted-engine global maintenance mode.
7. Shut down the engine VM from the first host with hosted-engine --vm-shutdown.
8. Manually start it on the second host with hosted-engine --vm-start.
9. Set the first host into maintenance mode.
10. Move it to the second cluster.
11. Activate it.

Comment 14 Polina 2018-11-05 13:36:56 UTC
The bug was verified on:
ovirt-hosted-engine-setup-2.2.31-1.el7ev.noarch
ovirt-engine-4.2.7.4-0.1.el7ev.noarch

Comment 15 Polina 2018-11-07 14:32:35 UTC
The verification was done by running 'hosted-engine --deploy --restore-from-file=backup_compute-he-4'. I was offered a choice for both DC and cluster, and the deployment succeeded.
But the verification must also be done for a deploy from scratch.

Comment 17 Sandro Bonazzola 2018-11-13 16:13:00 UTC
This bugzilla is included in oVirt 4.2.7 Async 1 release, published on November 13th 2018.

Since the problem described in this bug report should be resolved in oVirt 4.2.7 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

