Description of problem:
xm migrate ...
...will cause rgmanager to think the VM was stopped on the node it was running
on and to restart it. If the migration succeeds and the VM is restarted, you
end up with two copies of the VM running on the cluster.
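
For example, a live migration performed directly through Xen (the domain name
"guest1" and target host "node2" are illustrative):

    xm migrate --live guest1 node2

rgmanager is not told about the migration, so the next status check on the
source node fails and recovery kicks in.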
The thought was "don't do that", but we can protect against this in
rgmanager's recovery path.
For a service which is "migratory" - i.e. a VM with no children - if the status
check fails, we can then:
* ask each node to check for the "migratory" service.
* if it turns up, we can simply update rgmanager's state to reflect the new owner.
* if it does not turn up, take recovery as normal
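
A minimal sketch of that check in shell, assuming passwordless ssh between the
cluster nodes; the node list and the domain name "guest1" are illustrative, not
rgmanager's actual internals:

    #!/bin/sh
    VM="guest1"
    NODES="node1 node2 node3"

    for node in $NODES; do
        # "xm list <domain>" exits non-zero if the domain does not exist
        # on that node, so this tests where the VM actually landed.
        if ssh "$node" xm list "$VM" >/dev/null 2>&1; then
            echo "$VM found on $node - record the new owner, do not restart"
            exit 0
        fi
    done

    echo "$VM not running anywhere - take recovery as normal"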
Fixing Product Name. Cluster Suite was integrated into Enterprise Linux as of
version 5.0.
Warning: testing this is dangerous, since a failed test can leave the same VM
running on two nodes at once.
On 5.0, if you migrate a virtual machine off of a host node while rgmanager is
watching the VM, rgmanager will see this as a failure and restart the VM -
potentially ending up with the VM running in two places on the cluster.
On 5.1, if you migrate a VM off of a host node, rgmanager will first query the
other nodes in the cluster to see whether the VM is now running elsewhere; if
it is, rgmanager simply records the new owner, and only if the VM is not found
anywhere does it restart it.
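
On 5.1 the safer route is to let rgmanager drive the migration itself via
clusvcadm's migrate operation, so its state stays consistent (the service and
member names are illustrative):

    clusvcadm -M vm:guest1 -m node2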
Additionally, if you create a VM on a cluster node running rgmanager - and
rgmanager has an associated service for it - a subsequent 'enable' request will
not start the VM a second time; it will simply flag the VM as 'started' on the
node which is already running it.
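
For example, with the VM already running on node2 (names illustrative):

    clusvcadm -e vm:guest1

rgmanager marks vm:guest1 as 'started' on node2 rather than launching a second
instance.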
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.