Bug 1664777 - migrate hosted-engine to another cluster
Summary: migrate hosted-engine to another cluster
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.HostedEngine
Version: 4.2.7
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Votes: 2
Target Milestone: ---
Target Release: ---
Assignee: Asaf Rachmani
QA Contact: Nikolai Sednev
URL:
Whiteboard:
Depends On:
Blocks: 1784039
 
Reported: 2019-01-09 16:21 UTC by Douglas Duckworth
Modified: 2020-11-17 12:47 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1784039 (view as bug list)
Environment:
Last Closed: 2020-10-29 08:54:37 UTC
oVirt Team: Integration
Embargoed:


Attachments
image (50.62 KB, image/png), 2019-01-09 16:21 UTC, Douglas Duckworth
cannot migrate (127.08 KB, image/png), 2019-01-09 16:22 UTC, Douglas Duckworth
cannot change cluster (111.11 KB, image/png), 2019-01-09 16:25 UTC, Douglas Duckworth

Description Douglas Duckworth 2019-01-09 16:21:26 UTC
Created attachment 1519529 [details]
image

Hello

I am trying to migrate my hosted-engine VM to another cluster in the same data center.  Hosts in both clusters have the same logical networks and storage.  Yet migrating the VM isn't an option.

To get the hosted-engine VM onto the other cluster, I started the VM on a host in that other cluster using "hosted-engine --vm-start".

However, HostedEngine is still associated with the old cluster, as shown in the attached screenshot, so I cannot live migrate the VM.  Does anyone know how to resolve this?  With other VMs one can shut them down and then change the cluster via the "Edit" option, though that will not work for HostedEngine.

Comment 1 Douglas Duckworth 2019-01-09 16:22:14 UTC
Created attachment 1519530 [details]
cannot migrate

Comment 2 Douglas Duckworth 2019-01-09 16:25:35 UTC
Created attachment 1519531 [details]
cannot change cluster

When I try to change the cluster from "SharedStorage" to "SCU," I get a message that the HostedEngine VM must be stopped before changing its cluster.  Yet I have no idea how to change the cluster once the VM is down.

Comment 3 Doron Fediuck 2019-01-10 10:23:01 UTC
A cluster is a migration domain, which means no VM can live migrate outside of its cluster.
However, it is possible to move a VM between clusters once it's down.
The Hosted Engine VM is not a standard one, so it can be started on a different cluster in case of emergency, but not as a standard flow.
I do agree we should find a way to move the HE VM between clusters in a reasonable way.

Comment 4 Douglas Duckworth 2019-01-10 15:29:34 UTC
Hi Doron

Can you share the steps to move the Hosted Engine permanently once it's down?  I have not been able to find these anywhere.  If they're not officially supported, such as modifying the database, then that's fine. I will assume responsibility.

Comment 5 Doron Fediuck 2019-01-10 16:57:38 UTC
(In reply to Douglas Duckworth from comment #4)
> Hi Doron
> 
> Can you share steps to move Hosted Engine permanently once it's down?  I
> have not been able to find these anywhere.  If they're not officially
> supported, such as modifying database, then that's fine. I will assume
> responsibility.

If you move a host to local maintenance in one cluster, the VM will move to a hosted-engine node in another cluster (assuming you have no available nodes in the original cluster).
It's an emergency solution which allows the VM to keep running and managing the system.
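
For reference, a rough sketch of this emergency flow with the hosted-engine CLI (run on the host that currently holds the engine VM; an illustration only, not an official procedure):

# hosted-engine --vm-status                      <-- confirm which host currently runs the engine VM
# hosted-engine --set-maintenance --mode=local   <-- ovirt-ha-agent will restart the VM on another available HE host
# hosted-engine --set-maintenance --mode=none    <-- later, to make this host eligible for the engine VM again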

Comment 6 Douglas Duckworth 2019-01-10 17:25:03 UTC
Hi Doron

Yes, that's the current state.  The "SharedStorage" cluster has no hosts, whereas "SCU" has three hosts.  So I was able to run the Hosted Engine VM on hosts within the SCU cluster by stopping it and then starting it again.

Though in this state I cannot live migrate the Hosted Engine VM.

Comment 7 Douglas Duckworth 2019-01-14 15:55:23 UTC
Hi Doron

Can you please share the steps for migrating the hosted engine to the new cluster?  Can this be done by modifying the database?

Comment 8 Douglas Duckworth 2019-01-15 20:57:14 UTC
Hello

Can someone please respond to this issue?

Comment 9 Ryan Barry 2019-01-16 17:20:43 UTC
Simone, Martin, any thoughts about how this can be accomplished?

Douglas, it's probably easiest to send a mail to the users mailing list, since hosted engine crosses multiple teams (installation/configuration, high availability, and management), and it doesn't always follow the normal mechanisms for migrating VMs.

Comment 10 Simone Tiraboschi 2019-01-16 17:56:31 UTC
(In reply to Ryan Barry from comment #9)
> Simone, Martin, any thoughts about how this can be accomplished?

ovirt-ha-agent is not really aware of engine clusters; on the HE side you can simply:
- set global maintenance mode
- shut down the engine VM
- manually restart it on a different HE host assigned to a different cluster

The issue will probably come at the engine level, since the engine VM will be assigned to one cluster in the engine DB but running on a host of a different cluster, and this can have a lot of side effects.
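
In CLI terms, a rough sketch of those three steps (the manual/emergency flow, with the engine-level caveat above still applying):

# hosted-engine --set-maintenance --mode=global   <-- on any HE host
# hosted-engine --vm-shutdown                     <-- on the host currently running the engine VM
# hosted-engine --vm-status                       <-- wait until the engine VM is reported as down
# hosted-engine --vm-start                        <-- on an HE host assigned to the other cluster
# hosted-engine --set-maintenance --mode=none     <-- once the engine is up and healthy again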

Comment 11 Ryan Barry 2019-01-16 18:01:17 UTC
While true, I'd suggest that we may want to convert this into an RFE for hosted-engine.

The hosted-engine code in the engine is somewhat special (even the lack of ability to migrate it), and we'd want hosted engine itself to be able to check whether this is an allowable operation (whether the hosts being migrated to can reach HE storage, etc.). So much of hosted engine is handled from the host side (sanlock, storage mapping, migration of the HE VM when maintenance is initiated) that it doesn't make sense to me for this to be handled by the normal virt flow.

Thoughts?

Comment 12 Douglas Duckworth 2019-01-16 18:57:16 UTC
Hi Ryan

I already emailed the users list but nobody replied, so I came here for feedback.

Thanks everyone!

Since the solution isn't apparent, I put my hosts back into the previous cluster.  I can now live migrate the hosted engine VM again.

Comment 13 Sandro Bonazzola 2019-01-23 09:09:17 UTC
Martin, what's your point of view on this?

Comment 14 Denis Prezhevalsky 2019-01-23 10:34:40 UTC
Hello,
I also hit exactly the same issue while trying to set up a new RHV environment.
I believe this issue should be addressed.

Comment 15 Martin Tessun 2019-01-30 08:59:58 UTC
(In reply to Sandro Bonazzola from comment #13)
> Martin, what's your point of view on this?

In a nutshell:
- A cluster is a migration domain. Migrating to another cluster requires taking quite some precautions. As such it is disabled in the UI, which seems reasonable.
- You can still do cross-cluster migration from the REST API (afaik).
- Having 2+ hosts per HE-capable RHV cluster makes sense. You still have one "Hosted-Engine cluster", so restarting the HE on another cluster (and live-migrating within it) should be possible.
- Restarting the engine typically doesn't hurt, but live migration via the REST API across clusters should also work.

Given that, I believe it should already work; we probably need to test that the flows really work and maybe provide some Ansible scripts / automation for moving the HE between clusters.
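
As a hedged illustration of the REST API route (untested here for the HE VM; the engine address, credentials, VM id and target cluster id are placeholders):

# curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
      -X POST -d '<action><cluster id="TARGET_CLUSTER_ID"/></action>' \
      https://engine.example.com/ovirt-engine/api/vms/HE_VM_ID/migrate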

Comment 16 Nikolai Sednev 2019-01-30 09:27:54 UTC
I will need more detailed data regarding the implementation and usage of this functionality in order to properly prepare the test plan and execute the tests.
Please provide your input.

Comment 17 Simone Tiraboschi 2019-01-30 09:49:15 UTC
The cluster as defined at the engine level ensures that we are in a proper migration domain; if we remove those boundaries, we need to explicitly point out a list of preconditions to be manually checked in order to have this working without any side effects.

Currently ovirt-ha-agent starts the engine VM taking its definition from the libvirt XML in the OVF_STORE as generated by the engine.

And so we have something like:

<?xml version="1.0" encoding="UTF-8"?>
<domain xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0" type="kvm">
   <name>HostedEngine</name>
   <uuid>99a5966e-23c4-45af-be9f-fa800a10b32c</uuid>
   <memory>4194304</memory>
   <currentMemory>4194304</currentMemory>
   <maxMemory slots="16">16777216</maxMemory>
   <vcpu current="4">16</vcpu>
   <sysinfo type="smbios">
      <system>
         <entry name="manufacturer">oVirt</entry>
         <entry name="product">OS-NAME:</entry>
         <entry name="version">OS-VERSION:</entry>
         <entry name="serial">HOST-SERIAL:</entry>
         <entry name="uuid">99a5966e-23c4-45af-be9f-fa800a10b32c</entry>
      </system>
   </sysinfo>
   <clock offset="variable" adjustment="0">
      <timer name="rtc" tickpolicy="catchup" />
      <timer name="pit" tickpolicy="delay" />
      <timer name="hpet" present="no" />
   </clock>
   <features>
      <acpi />
   </features>
   <cpu match="exact">
      <model>Haswell-noTSX</model>
      <topology cores="1" threads="1" sockets="16" />
      <numa>
         <cell id="0" cpus="0,1,2,3" memory="4194304" />
      </numa>
   </cpu>
   <cputune />
   <devices>
      <input type="tablet" bus="usb" />
      <channel type="unix">
         <target type="virtio" name="ovirt-guest-agent.0" />
         <source mode="bind" path="/var/lib/libvirt/qemu/channels/99a5966e-23c4-45af-be9f-fa800a10b32c.ovirt-guest-agent.0" />
      </channel>
      <channel type="unix">
         <target type="virtio" name="org.qemu.guest_agent.0" />
         <source mode="bind" path="/var/lib/libvirt/qemu/channels/99a5966e-23c4-45af-be9f-fa800a10b32c.org.qemu.guest_agent.0" />
      </channel>
      <rng model="virtio">
         <backend model="random">/dev/random</backend>
      </rng>
      <graphics type="vnc" port="-1" autoport="yes" passwd="*****" passwdValidTo="1970-01-01T00:00:01" keymap="en-us">
         <listen type="network" network="vdsm-ovirtmgmt" />
      </graphics>
      <controller type="scsi" model="virtio-scsi" index="0">
         <address bus="0x00" domain="0x0000" function="0x0" slot="0x04" type="pci" />
      </controller>
      <video>
         <model type="vga" vram="32768" heads="1" />
         <address bus="0x00" domain="0x0000" function="0x0" slot="0x02" type="pci" />
      </video>
      <console type="pty">
         <target type="virtio" port="0" />
         <alias name="ua-816e131e-5718-45e7-b1" />
      </console>
      <controller type="ide">
         <address bus="0x00" domain="0x0000" function="0x1" slot="0x01" type="pci" />
      </controller>
      <controller type="virtio-serial" index="0" ports="16">
         <address bus="0x00" domain="0x0000" function="0x0" slot="0x05" type="pci" />
      </controller>
      <controller type="usb" model="piix3-uhci" index="0">
         <address bus="0x00" domain="0x0000" function="0x2" slot="0x01" type="pci" />
      </controller>
      <memballoon model="none" />
      <interface type="bridge">
         <model type="virtio" />
         <link state="up" />
         <source bridge="ovirtmgmt" />
         <driver queues="4" name="vhost" />
         <alias name="ua-d2a5518b-ab50-4a10-8954-c2eb31503551" />
         <address bus="0x00" domain="0x0000" function="0x0" slot="0x03" type="pci" />
         <mac address="00:16:3e:08:b4:80" />
         <mtu size="1500" />
         <filterref filter="vdsm-no-mac-spoofing" />
         <bandwidth />
      </interface>
      <disk type="file" device="cdrom" snapshot="no">
         <driver name="qemu" type="raw" error_policy="report" />
         <source file="" startupPolicy="optional" />
         <target dev="hdc" bus="ide" />
         <readonly />
         <alias name="ua-b4186c18-71d4-4124-bc52-34b860a2e431" />
         <address bus="1" controller="0" unit="0" type="drive" target="0" />
      </disk>
      <disk snapshot="no" type="file" device="disk">
         <target dev="vda" bus="virtio" />
         <source file="/rhev/data-center/00000000-0000-0000-0000-000000000000/3d0881ab-4f59-4d49-a877-e9bf6764b316/images/9d19d485-05ec-4526-ab89-445136febe2b/f6f903e3-46d2-4fcb-99a8-9c6c9a495f0f" />
         <driver name="qemu" io="native" type="raw" error_policy="stop" cache="none" />
         <alias name="ua-9d19d485-05ec-4526-ab89-445136febe2b" />
         <address bus="0x00" domain="0x0000" function="0x0" slot="0x06" type="pci" />
         <serial>9d19d485-05ec-4526-ab89-445136febe2b</serial>
      </disk>
      <lease>
         <key>f6f903e3-46d2-4fcb-99a8-9c6c9a495f0f</key>
         <lockspace>3d0881ab-4f59-4d49-a877-e9bf6764b316</lockspace>
         <target offset="LEASE-OFFSET:f6f903e3-46d2-4fcb-99a8-9c6c9a495f0f:3d0881ab-4f59-4d49-a877-e9bf6764b316" path="LEASE-PATH:f6f903e3-46d2-4fcb-99a8-9c6c9a495f0f:3d0881ab-4f59-4d49-a877-e9bf6764b316" />
      </lease>
   </devices>
   <pm>
      <suspend-to-disk enabled="no" />
      <suspend-to-mem enabled="no" />
   </pm>
   <os>
      <type arch="x86_64" machine="pc-i440fx-rhel7.3.0">hvm</type>
      <smbios mode="sysinfo" />
   </os>
   <metadata>
      <ovirt-tune:qos />
      <ovirt-vm:vm>
         <minGuaranteedMemoryMb type="int">4096</minGuaranteedMemoryMb>
         <clusterVersion>4.2</clusterVersion>
         <ovirt-vm:custom />
         <ovirt-vm:device mac_address="00:16:3e:08:b4:80">
            <ovirt-vm:custom />
         </ovirt-vm:device>
         <ovirt-vm:device devtype="disk" name="vda">
            <ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID>
            <ovirt-vm:volumeID>f6f903e3-46d2-4fcb-99a8-9c6c9a495f0f</ovirt-vm:volumeID>
            <ovirt-vm:shared>exclusive</ovirt-vm:shared>
            <ovirt-vm:imageID>9d19d485-05ec-4526-ab89-445136febe2b</ovirt-vm:imageID>
            <ovirt-vm:domainID>3d0881ab-4f59-4d49-a877-e9bf6764b316</ovirt-vm:domainID>
         </ovirt-vm:device>
         <launchPaused>false</launchPaused>
         <resumeBehavior>auto_resume</resumeBehavior>
      </ovirt-vm:vm>
   </metadata>
</domain>

We need to be sure somehow that a host in the second cluster is able to start a VM like that.
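
As an illustration of the kind of manual pre-checks this implies, a candidate host in the second cluster could be inspected along these lines before starting the VM there (paths and IDs are taken from the XML above; the list is an assumption, not exhaustive):

# virsh -r capabilities | grep '<model>'           <-- the host CPU must be able to run the Haswell-noTSX model
# ip link show ovirtmgmt                           <-- the management bridge must exist on the host
# ls /rhev/data-center/00000000-0000-0000-0000-000000000000/3d0881ab-4f59-4d49-a877-e9bf6764b316/images/   <-- the HE storage domain must be reachable
# systemctl status ovirt-ha-agent ovirt-ha-broker  <-- hosted-engine has to be deployed on the host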

Comment 18 Nikolai Sednev 2019-01-30 10:20:54 UTC
(In reply to Simone Tiraboschi from comment #17)
> The cluster as defined at the engine level ensures that we are in a proper
> migration domain; if we remove those boundaries, we need to explicitly point
> out a list of preconditions to be manually checked in order to have this
> working without any side effects.
> 
> [full libvirt domain XML of the HostedEngine VM snipped; see comment #17]
> 
> We need to be sure somehow that a host in the second cluster is able to
> start a VM like that.

1. Will the migration be possible from the CLI, the UI, or both?
2. Will there be pre-migration checks before the migration?
3. In case of a failed migration, what is expected to happen to the engine?

Comment 19 Martin Tessun 2019-02-05 14:56:19 UTC
(In reply to Nikolai Sednev from comment #18)

> 1. Will the migration be possible from the CLI, the UI, or both?

I would expect CLI (REST API) only.

> 2. Will there be pre-migration checks before the migration?

In case we get the Ansible modules, I expect that the module could do some prechecks, but overall this remains mainly the responsibility of the admin.

> 3. In case of a failed migration, what is expected to happen to the engine?

It depends on the error, so one of:
- The engine continues running on the source host
- The engine is killed and restarted on an available host

At least this would be my expectation here.

Comment 20 Douglas Duckworth 2019-11-05 14:57:39 UTC
Thank you for working on this feature!

Comment 21 Marina Kalinin 2020-06-17 01:58:41 UTC
I am trying to think of a scenario in which we would want to move the HE VM to another cluster, and whether this request is legitimate or not.
The HE VM should reside only inside a single HE cluster, one that was deployed as such and has dedicated daemons running on it, doing HE jobs for the HE VM.

What happens when the HE VM disappears from that cluster and moves to another cluster? Would that cluster even have HE deployed?
That sounds like wrong behavior to me, and I suggest not opening Pandora's box by implementing this functionality, since it is beyond the original design.

Comment 22 Ricardo Alonso 2020-07-10 10:06:15 UTC
(In reply to Marina Kalinin from comment #21)
> I am trying to think about a scenario that we would like to move HE VM to
> another cluster. And if this request is legitimate or not.
> HE VM should reside only inside a single HE cluster, that was deployed as a
> such and has dedicated daemons running on it, doing HE jobs for HE VM.

There are several:
- Decommissioning 
- Better High availability
- Maintenance
- Upgrade
  
 
> What happens when HE VM disappears from that cluster and moves to another
> cluster? Would it even have the HE deployed?
> Sounds wrong behavior to me and I suggest to not open the pandora box by
> implementing this functionality, since it is beyond the original design.

The HE VM isn't a regular VM for oVirt. Inside the manager you have minimal
control over it. The VM's status is controlled by the ovirt-ha-agent and ovirt-ha-broker
daemons, so the cluster is only a live migration domain used to improve high
availability.

We just need a process that will do the following:
# hosted-engine --set-maintenance --mode=global 
# hosted-engine --vm-shutdown
# hosted-engine --vm-change-cluster <cluster_name>     <-- this is what we need
# hosted-engine --vm-start
# hosted-engine --set-maintenance --mode=none
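
The --vm-change-cluster option above does not exist today. As a hedged sketch only, the missing middle step might correspond to updating the (stopped) VM's cluster through the REST API; the engine address, credentials and ids below are placeholders, and the engine may well refuse this for the HostedEngine VM, which is exactly what this request is about:

# curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
      -X PUT -d '<vm><cluster id="TARGET_CLUSTER_ID"/></vm>' \
      https://engine.example.com/ovirt-engine/api/vms/HE_VM_ID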

Comment 23 Simone Tiraboschi 2020-08-07 12:06:05 UTC
Martin properly replied in https://bugzilla.redhat.com/show_bug.cgi?id=1664777#c19

Comment 24 Sandro Bonazzola 2020-10-29 08:54:37 UTC
Today, adding a hosted-engine host to a different cluster is no longer possible, so this flow is no longer relevant.

