Created attachment 419407 [details]
gdb logs of both servers during the hang
Description of problem:
Attempting concurrent bi-directional migration causes libvirt to hang.
This is reproducible and has happened 3 times so far.
The scenario is as follows:
- 2 running hosts
- 2 running vms
== vm 1 is on host 1
== vm 2 is on host 2
- using rhev-m, perform a concurrent migration, meaning:
== vm 1, which runs on host 1, is destined to run on host 2
== vm 2, which runs on host 2, is destined to run on host 1
Attached information:
1) libvirtd.log from both servers (white-vdse and pink-nehalem2)
2) gdb trace information (attached to the hung process)
vdsm-4.9-7.el6.x86_64
libvirt-0.8.1-7.el6.x86_64
qemu-kvm-0.12.1.2-2.68.el6.x86_64
kernel: 2.6.32-31.el6.x86_64
> Attached information:
>
> 1) libvirtd.log from both servers (white-vdse and pink-nehalem2)
> 2) gdb trace information (attached to the hung process)
It seems like you wanted to attach the libvirtd log but forgot to do so in the end. Could you please provide it?
The problem is this:
With PEER2PEER migration, the source libvirtd makes API calls into the destination libvirtd. Unfortunately, it holds the qemu driver lock and the virDomainObj lock while doing so. If the other libvirtd is simultaneously trying to call back into this libvirtd, it too holds its qemu driver and domain object locks. Deadlock is assured.
As we do when interacting with the QEMU monitor, we need to add BeginJob/EndJob calls, plus an equivalent of EnterMonitor/LeaveMonitor, around every API call to the remote libvirtd. This ensures we release all locks while making those API calls.
This impacts the methods doPeer2PeerMigrate, doNonTunnelMigrate, doTunnelMigrate and doTunnelSendAll.
Looking at the code, I think we already have the BeginJob/EndJob calls (though it's not exactly trivial to follow). When we come into qemudDomainMigratePerform(), we lock the driver, then lock the virDomainObj (via virDomainFindByUUID()), then call qemuDomainObjBeginJobWithDriver(). After that we call doPeer2PeerMigrate(), so we already have the job "started". However, qemuDomainObjBeginJobWithDriver() just starts the job; the caller is still holding the driver lock and the domain obj lock after it returns. What we need is exactly what Dan mentioned in the second part of his comment: an equivalent to EnterMonitor/LeaveMonitor, called something like "EnterRemoteLibvirt/LeaveRemoteLibvirt", that will take an additional reference (to keep things safe) and drop both the driver lock and the obj lock.
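The proposed Enter/Leave pattern can be sketched as follows. This is a minimal model, not libvirt code: the "EnterRemoteLibvirt/LeaveRemoteLibvirt" names come from the comment above (they were a proposal, not an existing API), and the reference count stands in for virDomainObj's refcount.

```python
import threading

# Minimal sketch of the proposed fix: take an extra reference, then drop
# both locks for the duration of the blocking remote call, so the peer
# libvirtd can call back in without deadlocking.
driver_lock = threading.Lock()   # stands in for the qemu driver lock
dom_lock = threading.Lock()      # stands in for the virDomainObj lock
dom_refs = 1                     # stands in for the domain obj refcount

def enter_remote_libvirt():
    """Analogue of the proposed EnterRemoteLibvirt."""
    global dom_refs
    dom_refs += 1           # extra reference keeps the object alive
    dom_lock.release()      # drop both locks before the blocking RPC
    driver_lock.release()

def leave_remote_libvirt():
    """Analogue of the proposed LeaveRemoteLibvirt."""
    global dom_refs
    driver_lock.acquire()   # reacquire in the original order
    dom_lock.acquire()
    dom_refs -= 1           # drop the extra reference

# As in qemudDomainMigratePerform(): driver lock, then domain obj lock.
driver_lock.acquire()
dom_lock.acquire()

enter_remote_libvirt()
# ... the long blocking call into the destination libvirtd happens here,
# with no locks held, so the peer can safely call back into us ...
leave_remote_libvirt()

dom_lock.release()
driver_lock.release()
print("refs:", dom_refs)
```

The key design point is that the extra reference is taken while the locks are still held, so the domain object cannot be freed out from under the caller during the unlocked window.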
Chris Lalancette
Verified running with the following versions:
qemu-kvm-0.12.1.2-2.97.el6.x86_64
vdsm-4.9-11.el6.x86_64
kernel: 2.6.32-52.el6.x86_64
Executed several concurrent bi-directional migrations; all vms were successfully migrated to their destination hosts, and virsh did not hang.
Fixed.
And of course the libvirt version: libvirt-0.8.1-19.el6.x86_64
Comment 16, releng-rhel@redhat.com, 2010-11-11 14:48:53 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.