58738 merged -> MODIFIED
The Vdsm changes do not require doc_string updates.
One more patch needs to get in :) https://gerrit.ovirt.org/#/c/58465/
*** Bug 1337203 has been marked as a duplicate of this bug. ***
Verification builds:
rhevm-3.6.7.5-0.1.el6
libvirt-client-1.2.17-13.el7_2.5.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.16.x86_64
vdsm-4.17.31-0.el7ev.noarch
sanlock-3.2.4-2.el7_2.x86_64

Verification scenarios:

# Add a 60-second sleep to /usr/share/vdsm/clientIf.py (reproduces this bug before the fix):
1. Use 2 hosts under the same cluster. On the SPM host, edit /usr/share/vdsm/clientIf.py and add time.sleep(60) under def _recoverExistingVms(self):
2. Enable HA on the VM.
3. Run the VM.
4. Restart the vdsmd service (look for "VM is running in db and not running in VDS 'hostname'" in engine.log).
5. Verify the VM does not migrate to the second host. After the VDSM service restarts, verify the same qemu-kvm process is still running on the SPM host and that no qemu-kvm process for the same VM exists on the second host. Verify the VM continues to run properly.

# Stop the VDSM service:
1. Stop the VDSM service on the host with the running VM.
2. Wait for the host to become non-responsive and the VM to enter the Unknown state.
3. Verify soft fencing starts on the host and the VM status is restored to Up.
4. Verify the VM continues to run properly.

# Power off the host:
1. Power off the host with the VM running on it.
2. Wait for the host to become non-responsive and the VM to enter the Unknown state.
3. From the webadmin, confirm 'host has been rebooted'.
4. Verify the VM migrates to the active host and restarts.
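The first scenario works by stalling VM recovery during vdsmd startup so the engine polls the host while recovery is still pending. A minimal sketch of the injected delay, assuming simplified class and method names modeled on the steps above (not the actual vdsm source):

```python
import time

# Hypothetical, simplified stand-in for the clientIF class in
# /usr/share/vdsm/clientIf.py; only the injected sleep is relevant.
RECOVERY_DELAY = 60  # seconds; the verification steps above use 60


class clientIF:
    def __init__(self, delay=RECOVERY_DELAY):
        self._delay = delay
        self.recovered = False

    def _recoverExistingVms(self):
        # Injected sleep from step 1: while this delay runs, the engine
        # sees the host up but the VM not yet reported, producing the
        # "VM is running in db and not running in VDS" condition.
        time.sleep(self._delay)
        # ... the real recovery logic would repopulate the VM list here ...
        self.recovered = True
```

With the fix in place, the engine should not treat the VM as down during this window, so no migration or restart is triggered.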
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1364
*** Bug 1452393 has been marked as a duplicate of this bug. ***