Bug 1097227 - VM migration in RHEV environment failed due to libvirt error "Failed to inquire lock: No such process"
VM migration in RHEV environment failed due to libvirt error "Failed to inqui...
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.5
Hardware: x86_64 Linux
Priority: high  Severity: high
Target Milestone: rc
Target Release: ---
Assigned To: Martin Kletzander
QA Contact: Virtualization Bugs
Keywords: ZStream
Depends On: 1088034
Reported: 2014-05-13 08:14 EDT by Chris Pelland
Modified: 2014-05-27 12:27 EDT
CC List: 13 users

Fixed In Version: libvirt-0.10.2-29.el6_5.8
Doc Type: Bug Fix
Doc Text:
Cause: libvirt did not check whether the QEMU domain process was registered with sanlock.
Consequence: when VDSM was updated, configured sanlock for the host and libvirt, and restarted the libvirt daemon, any domain started before the sanlock configuration and daemon restart could not be migrated.
Fix: libvirt now checks whether the QEMU domain process is registered with sanlock before working with it.
Result: migration works in the scenario described above.
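As a quick sanity check (an aside, not from the original report), the sanlock client tool can list the processes sanlock currently has registered; a domain started before libvirtd was switched to the sanlock lock manager will be missing from that list, which is exactly the condition the fix now detects:

# sanlock client status

The output format varies by sanlock version, but registered client processes appear as per-pid entries.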
Last Closed: 2014-05-27 12:27:18 EDT




External Trackers:
Tracker ID: Red Hat Product Errata RHSA-2014:0560
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: libvirt security and bug fix update
Last Updated: 2014-05-27 16:25:33 EDT

Description Chris Pelland 2014-05-13 08:14:19 EDT
This bug has been copied from bug #1088034 and has been proposed
to be backported to 6.5 z-stream (EUS).
Comment 6 zhenfeng wang 2014-05-15 04:51:05 EDT
I can reproduce this bug with libvirt-0.10.2-29.el6_5.7.x86_64 in a pure libvirt environment.

Steps to reproduce:
1. Start a guest with the default configuration, i.e. with sanlock disabled.

2. Enable sanlock by editing /etc/libvirt/qemu.conf:
lock_manager = "sanlock"

and editing /etc/libvirt/qemu-sanlock.conf:
auto_disk_leases = 0
require_lease_for_disks = 0

3. Restart libvirtd.

4. Migrate the guest:
# virsh migrate rhel651 qemu+ssh://$target_ip/system --verbose 
error: Failed to inquire lock: No such process
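This error is expected with the unfixed package: the guest was started before sanlock was enabled, so its QEMU process was never registered with sanlock, and the lock inquiry fails with "No such process". A hedged way to confirm this on the source host (assuming the RHEL 6 QEMU binary name qemu-kvm):

# ps -C qemu-kvm -o pid,args | grep rhel651    # find the guest's QEMU pid
# sanlock client status                        # the pid does not appear among the registered clients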

Verify this bug with libvirt-0.10.2-29.el6_5.8:

1. Retest step 4 with libvirt-0.10.2-29.el6_5.8; the guest migrates successfully.
# virsh migrate rhel651 qemu+ssh://$target_ip/system --verbose
Migration: [100 %]

2. Migrate the guest back to the source; this fails, which is the expected result since sanlock was not properly configured on the source.
# virsh migrate --live rhel651 qemu+ssh://$source_ip/system --verbose
root@source_ip's password: 
error: Child quit during startup handshake: Input/output error
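The handshake error is raised on the destination of this attempt (the original source host), where QEMU exits during startup because its lock manager setup fails; the underlying error is usually visible in the per-domain log at libvirt's default log location:

# tail /var/log/libvirt/qemu/rhel651.log    # run on the destination of the failed migration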

3. Configure sanlock properly on the source, then migrate the guest back to it; the guest migrates successfully.

# getsebool -a | grep sanlock
sanlock_use_fusefs --> off
sanlock_use_nfs --> on
sanlock_use_samba --> off
virt_use_sanlock --> on
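If virt_use_sanlock were off, it could be enabled persistently with setsebool; the same applies to sanlock_use_nfs when the lease directory lives on NFS (a hedged aside, not part of the original verification run):

# setsebool -P virt_use_sanlock on
# setsebool -P sanlock_use_nfs on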

# cat /etc/libvirt/qemu.conf
lock_manager = "sanlock"

# tail -5 /etc/libvirt/qemu-sanlock.conf 
user = "sanlock"
group = "sanlock"
host_id = 1
auto_disk_leases = 1
disk_lease_dir = "/var/lib/libvirt/sanlock"
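With auto_disk_leases = 1, libvirt expects disk_lease_dir to exist and be writable by the configured sanlock user and group; a minimal setup sketch, assuming the packages do not create the directory:

# mkdir -p /var/lib/libvirt/sanlock
# chown sanlock:sanlock /var/lib/libvirt/sanlock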

# service wdmd start
# service sanlock start

# service libvirtd restart

# virsh migrate --live rhel651 qemu+ssh://$source_ip/system --verbose
root@$source_ip's password: 
Migration: [100 %]

Based on the above steps, this bug is marked VERIFIED.
Comment 8 errata-xmlrpc 2014-05-27 12:27:18 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0560.html
